
PCM 219

Instrumentation & Petroleum Process Control


The process industry refers to a sector of manufacturing that involves the
production of goods through a series of steps or processes. Unlike discrete
manufacturing, where individual items are produced separately, process
manufacturing involves the continuous or batch production of goods such as
chemicals, food, pharmaceuticals, and fuels.

In this industry, raw materials undergo various chemical or mechanical
transformations to create the final product. The emphasis is on controlled and
systematic processes, often involving precise measurements and specific
conditions. Process industries play a pivotal role in providing essential
products that cater to various aspects of our daily lives and contribute
significantly to the global economy.

In a nutshell, process control is used in continuous production – in
manufacturing and in other fields and industries where some kind of material
is produced without any kind of interruption – as well as in “batch
processing.” It’s used to automatically control the conditions in which a
product is made – ensuring better quality and efficiency.

Essentially, then, process control is about taking the human out of the
feedback loop – allowing the advanced, automated systems of an industrial plant to
handle minor adjustments automatically, without intervention beyond human
monitoring of each system.
Some examples of process industries are:
 Bulk – drug Pharmaceuticals,
 Chemical, Tire, and Process industries (CTP),
 Cosmeceuticals and Personal Care,
 Food and Beverages, Food Processing,
 Nutraceuticals,
 Paints and Coatings,
 Semiconductor fabrication,
 Specialty chemicals,
 Steel and Aluminium Processing,
 Textiles,
 Waste Management, etc.
Instrumentation and control are interdisciplinary fields. They require
knowledge of chemistry, mechanics, electricity and magnetism, electronics,
microcontrollers and microprocessors, software languages, process control, and even
more such as the principles of pneumatics and hydraulics and communications.

This is what makes instrumentation and control so interesting and instructive.

A Measurement Instrument

A measurement instrument is a device capable of detecting change,
physical or otherwise, in a particular process. It then converts these
physical changes into some form of information understandable by
the user. Consider the example below:

An example of a measurement instrument

When the switch is closed, the resistor generates heat, increasing the temperature of
the liquid in the tank. This increase is detected by the measurement instrument and
shown on the scale of that instrument.

We can get the information on the physical changes in a process using direct
indication or a recorder.

Indication
This is the simplest form of measurement; it allows us to know the current state of
the variable.
Monitoring a variable via indication

Recorder
A device that can store data allows us to observe the current state
of the variable and how it behaved in the past. A recorder provides
us with the history of the variable.
A display showing how measurements have changed
over time
Elements of a Measurement Instrument
Measurement instruments consist primarily of the
following parts:
 Sensor: This element is a device that experiences
changes in its physical properties as a result of
changes in the process it's measuring.
 Amplifier / Conditioner: Changes detected by the
sensor may be very small, so they must be amplified
and then conditioned such that they can be properly
displayed.
 Display: The measured data should be presented in
an understandable way. This can be done using a
graduated instrument or an electronic display.
Sometimes the display additionally acts as a recorder
in order to convey the measurement's history or
trends.

Elements of a measurement instrument
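To make the sensor, conditioner, and display chain concrete, here is a minimal
sketch in Python. The sensor characteristic and amplifier gain are assumed
values for illustration, not taken from the text:

```python
# Hypothetical measurement chain: sensor -> amplifier/conditioner -> display.

def sensor(temperature_c):
    # Convert the physical quantity into a small electrical signal;
    # assume the sensor produces 0.4 mV per degree Celsius.
    return temperature_c * 0.0004  # volts

def conditioner(raw_volts, gain=2500.0):
    # Amplify the weak sensor signal so the display can use it directly
    # (the gain is chosen here so 1 V corresponds to 1 degree).
    return raw_volts * gain

def display(conditioned_value):
    # Present the measurement in an understandable form.
    print(f"Temperature: {conditioned_value:.1f} °C")

display(conditioner(sensor(25.0)))  # prints: Temperature: 25.0 °C
```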

Usually, the measurement information generated by an instrument must
be sent to a control center (or control room) that is physically distant from the
instrument. In general, this information must conform to established
specifications.
Measurement information is sent from the instrument to the control room

When an instrument has the ability to send information, we call it a
transmitter (XMTR).

Classification of Instruments
There are different classifications for measurement instruments. We can
classify them, for example, as in-field instruments or panel instruments. The
in-field instrument is installed close to the process or measuring point. It must
be physically robust if it will be exposed to harsh environmental conditions.
Panel instruments are in a controlled-environment room (often a clean space
with air conditioning and controlled humidity).
Another classification is pneumatic instruments and electrical/electronic
instruments.

Pneumatic Instruments
As the name suggests, these are devices that are powered by air.
One of the advantages of these instruments is that they do not consume
electricity, so they can be used in areas where it would be dangerous or
inconvenient to use electrical power. They work with a single variable, are
imprecise instruments, are affected by vibrations and temperature changes,
and have high maintenance requirements. The output signal of the
transmitters is between 3 and 15 psi, and the maximum transmission
distance is approximately 200 meters.
Basic diagram of a pneumatic instrument

Electrical / Electronic Instruments


Electronic instruments can be divided into three general categories:
analog, smart analog, and digital.

Analog:
 Output signal: 4 - 20 mA
 Transmission distance: 1200 m (typical)
 Data for one variable is transmitted
 Good accuracy
 Easy maintenance

Basic diagram of an electronic instrument (XMTR)


Smart Analog:
 Characterization of the sensor as measuring temperature, static
pressure, etc.
 Excellent accuracy
 Self-diagnosis (i.e., the sensor can analyze problems in its own
functionality)
 One variable

Digital:
 Multiple instruments can use a single cable
 Transmission of multiple values for each instrument (process variables,
calibration, diagnostics, range)
 Distance: approximately 1900 m without a repeater
 Data capacity is influenced by the mode of transmission (cable, fiber
optic, wireless)

Digital transmitters

General Concepts
Range: The region between the limits within which a variable is measured.
It indicates the minimum and maximum values that limit the region. The
range is expressed with two numbers, e.g., 10 to 20°C, 10 to 150 V, 0 to
100%
Span: Calculated as the maximum value of the range minus the minimum
value of the range. Span is expressed with a single number in process units,
e.g., 120°C, 30 V, 150 liters per second.
Elevation: If the lower limit of the range is a positive value, this lower
limit is the elevation. Example: If the range is 50°C to 200°C, we can say that
the elevation is 50°C, or 50/150 × 100 = 33.3% of the span.
Depression (also referred to as suppression): If the lower limit of the
range is negative, the absolute value of this lower limit is the depression.
Example: If the range is -10 °C to 80 °C, we can say that the depression is 10
°C, or 10/90 × 100 = 11.1% of the span.
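The span, elevation, and depression calculations above fit in a few lines of
Python; this sketch simply restates the two worked examples:

```python
# Span, elevation, and depression from a range (low, high).

def span(low, high):
    return high - low

def elevation_pct(low, high):
    # Elevation applies when the lower range limit is positive.
    return low / span(low, high) * 100 if low > 0 else 0.0

def depression_pct(low, high):
    # Depression applies when the lower range limit is negative.
    return abs(low) / span(low, high) * 100 if low < 0 else 0.0

print(span(50, 200), elevation_pct(50, 200))   # 150  33.33... (% of span)
print(span(-10, 80), depression_pct(-10, 80))  # 90   11.11... (% of span)
```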
Over range: When a device is calibrated to operate within a certain range
but may be subjected to values above or below that range, then it requires a
protection mechanism to prevent damage to the instrument or to prevent the
indicator from exceeding its upper or lower limit. When the measured values
are above the maximum value, we have positive over range. When the
measured values are below the minimum value, we have negative over
range.

Examples of range, span, elevation, and depression

Error: The difference between the measured value and the actual (or
expected, or desired) value of a physical variable. The error can be positive or
negative. When the measured value is greater than the actual value, the error
is positive. When the measured value is less than the actual value, the error is
negative.
If measured > actual, error > 0
If measured < actual, error < 0
The error can be expressed
 in engineering units (e.g., °C, psi)
 as a percentage of the span (e.g., +/- 3% of the span)
 as a percentage of the measurement (e.g., +/- 5% of the measurement)
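As a quick illustration of these three ways of expressing error, consider the
following sketch (the measured and actual values are assumed for the example):

```python
# One error expressed three ways, per the list above.
measured, actual = 78.0, 76.0        # °C (assumed values)
span = 90.0 - (-10.0)                # instrument span, here 100 °C

error = measured - actual                  # engineering units: +2.0 °C
error_pct_span = error / span * 100        # +2.0 % of the span
error_pct_reading = error / actual * 100   # about +2.6 % of the measurement

print(error, error_pct_span, round(error_pct_reading, 1))  # 2.0 2.0 2.6
```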

Reference value: In a general sense, this refers to the actual, expected,
or desired value of a variable. In the context of a feedback control system, the
measured value is fed back and subtracted from the reference value in order
to generate the error signal.

Accuracy: A number that defines the limits of the error. When we say that
an instrument has an accuracy of 0.1% of the span, this means that anywhere
within the range, the readings do not differ from the actual value by more
than 0.1% of the span.

An Example
For a better understanding of the concepts expressed above, consider the
following example.
We have an oil tank where we are required to continuously measure the
temperature. The operating conditions for this process are as follows:
 Minimum temperature: -10 °C
 Maximum temperature: 90 °C
 The measurement accuracy must be 1% of the span or better
 The temperature measurement must be displayed locally and remotely
Our example system

First, we must select a measuring instrument that allows us to measure
the temperature of the liquid in the tank. Since the information should be
available locally and remotely, we will choose a temperature transmitter.
This transmitter must have the following characteristics:
 Range: -10 °C to 90 °C
 Span: 90 °C - (-10 °C) = 100 °C
 Depression: 10 °C or 10% of the span
 Accuracy: 1% of the span = 1% × 100 °C = 1 °C
 This accuracy of 1% ensures that, in each measurement or
temperature reading, variation or errors will not exceed +/- 1 °C
On an additional note, we must ensure a proper relationship between the
range and the standardized transmitter output. To calibrate the instrument,
we must associate the minimum value of the range (-10 °C) with the
minimum value of the output (4 mA) and the maximum value of the range
(90 °C) with the maximum value of the output (20 mA).
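The calibration described above is a simple linear scaling. A minimal sketch
of that mapping, using the example's range of -10 °C to 90 °C and the
standard 4-20 mA output:

```python
# Linear scaling of the measured temperature onto the 4-20 mA loop current.

def temp_to_ma(t_c, t_min=-10.0, t_max=90.0, i_min=4.0, i_max=20.0):
    return i_min + (t_c - t_min) / (t_max - t_min) * (i_max - i_min)

print(temp_to_ma(-10.0))  # 4.0  mA (lower range value)
print(temp_to_ma(40.0))   # 12.0 mA (mid-scale)
print(temp_to_ma(90.0))   # 20.0 mA (upper range value)
```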

The overall performance of an instrument is based on its static and
dynamic characteristics. It indicates how well the instrument measures the
desired input and rejects the spurious (or undesired) inputs.

STATIC CHARACTERISTICS
Some of the static characteristics of instruments are accuracy, sensitivity,
reproducibility, precision, precision error, drift, static error, dead zone/band,
hysteresis, resolution, linearity, stability, threshold, readability, tolerance and
range or span.
Accuracy
 Accuracy is defined as the degree of closeness at which the instrument
reading approaches the true value of the quantity to be measured.
 Due to the effects of temperature and humidity the measured quantity
varies from the true value
 Accuracy is expressed in the “Percentage of Full-Scale Reading”, for
instruments having a uniform scale.
 Specifying accuracy as a percentage of the actual reading, rather than of
full scale, gives a better indication of the error in the quantity being
measured.

Sensitivity
 In steady-state conditions, sensitivity is defined as the ratio of a change
in output to the change in input that produced it.
 For a given instrument, a high sensitivity means that a small change in
the measured variable produces a large change in the output.
 Sensitivity should not be confused with the dead band, which is the
maximum change in an input signal that produces no change in the output.
 Note: The sensitivity of the instrument should be high.
Thus, sensitivity is expressed as: infinitesimal change in output /
infinitesimal change in input
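In practice, sensitivity can be estimated as the slope of the
output-versus-input calibration curve. A small sketch with assumed
thermocouple-like data (about 41 µV per °C):

```python
# Sensitivity = change in output / change in input (slope of the calibration).
inputs_c   = [20.0, 30.0, 40.0]   # °C (assumed calibration points)
outputs_mv = [0.82, 1.23, 1.64]   # mV

sensitivity = (outputs_mv[-1] - outputs_mv[0]) / (inputs_c[-1] - inputs_c[0])
print(round(sensitivity, 3))  # 0.041 mV/°C
```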

Repeatability
This is the degree of closeness with which a given value may be
repeatedly measured. It is the closeness of output readings when the
same input is applied repetitively over a short period of time. The
measurement is made on the same instrument, at the same location,
by the same observer and under the same measurement conditions. It
may be specified in terms of units for a given period of time.

Reproducibility
This relates to the closeness of output readings for the same input when
there are changes in the method of measurement, observer, measuring
instrument location, conditions of use and time of measurement. Perfect
reproducibility means that the instrument has no drift. Drift means that with a
given input the measured values vary with time.
Reproducibility and Repeatability are a measure of closeness with which a
given input may be measured over and over again. The two terms may cause
confusion. Therefore, note the distinction between the two terms:
Reproducibility is specified in terms of scale readings over a given period of
time. On the other hand, Repeatability is defined as the variation of scale
reading and is random in nature.

Precision
 Precision is the degree of exactness of the designed instrument.
 Precision is composed of two characteristics: conformity and
significant figures.
 The more significant figures, the greater the precision.
 For example, consider two resistors with true values of 1792 ohms and
1710 ohms; repeated measurement of each indicates only 1.7 K ohms, so an
operator is unable to read the true value from the scale.

Precision Error
 Precision error is generated by the limitations of a measuring
instrument.
 In the example above, the operator obtains a consistent reading of 1.7 K
ohms; there is no deviation among the observed values, even though the
reading does not conform exactly to the true value.
 The above example indicates that conformity is limited by the lack of
significant figures obtained.

Drift
 Drift is defined as an unexpected change in the output of a measured
variable over time that is unrelated to a change in the input or in the
operating conditions.
 Drift is caused by environmental factors such as mechanical vibrations,
temperature variation, stray electric fields, stray magnetic fields, and
thermal EMFs.
 A drift in instrument calibration occurs due to the aging of parts.
 Drift in flow measurement occurs due to wear and tear of primary
sensing elements such as orifice plates.
 A drift in temperature measurement occurs due to scale formation on
the thermowell.
 Drift in Thermocouples or RTD occurs due to the change of metallic
properties of elements.
 For a measuring device, drift can be systematic or random, or
sometimes both.
 Flow drift occurs in a systematic way because of wear and tear at the
edge of an orifice plate.
Drift is further classified as:
1. Zero Drift: is defined as the deviation in the measured variable
starting right from zero in the output with time.
2. Span Drift: is defined as a proportionate change in indication along the
upward scale, this span drift is also known as sensitivity drift.
3. Zonal Drift: is defined as the drift that occurs at a certain portion of
the span of an instrument.
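The difference between zero drift and span (sensitivity) drift can be seen in
a two-line model; the offset and gain error below are assumed values for
illustration:

```python
# Zero drift shifts every reading by the same amount; span drift scales
# the reading, so its effect grows toward the top of the scale.

def zero_drift(reading, offset=0.8):
    return reading + offset

def span_drift(reading, gain_error=1.02):
    return reading * gain_error

print(zero_drift(0.0), zero_drift(100.0))  # 0.8 100.8 -> constant shift
print(span_drift(0.0), span_drift(100.0))  # 0.0 102.0 -> proportional shift
```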
Static Error
 Static error is the difference between the value indicated by the
measuring instrument and the true value of the measured quantity, when
the measurement is not affected by changing operating conditions.
 Static error = Indicated value – True value of the measured quantity.
 If the static error is +ve, it means the instrument reads a high value.
 If the static error is -ve, it means the instrument reads a low value.

Dead Zone/Band
 The dead zone (or dead band) is the largest range of values of a
measured variable to which the instrument does not respond.
 The dead zone occurs in an indicating instrument due to static friction.
 Due to this static friction, a control valve may not respond to small
changes in the signal from the controller.
Hysteresis
 Hysteresis is a phenomenon that defines various effects of output
during loading and unloading.
 Generally, an instrument may indicate one set of output values for
increasing input values of an instrument,
 It may indicate a different set of output values for the decreasing input
values of an instrument.
 The maximum variation is observed at 50% of the full scale for
increasing and decreasing inputs.
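Hysteresis is usually quantified as the largest gap between the rising and
falling calibration curves. A sketch with made-up calibration data (note that
the largest gap appears near mid-scale, as stated above):

```python
# Outputs recorded at the same input points, first with the input rising,
# then with the input falling (assumed data).
rising  = [0.0, 24.6, 49.0, 73.9, 100.0]
falling = [0.0, 25.8, 51.2, 74.8, 100.0]

hysteresis = max(abs(r - f) for r, f in zip(rising, falling))
print(round(hysteresis, 1))  # 2.2 -> maximum deviation, here at 50% of scale
```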
Resolution
 Resolution is the smallest change in the measured quantity that can be
detected with certainty by an instrument.
 If a non-zero input quantity is increased slowly, the output will not
change until some minimum increment of the input is exceeded. This
minimum increment of input is termed the resolution.
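A display with finite resolution effectively quantizes the reading, so input
changes smaller than the resolution produce no visible change. A minimal
sketch assuming a 0.5 °C resolution:

```python
RESOLUTION = 0.5  # °C (assumed)

def displayed(value_c):
    # Round the true value to the nearest displayable step.
    return round(value_c / RESOLUTION) * RESOLUTION

print(displayed(25.1), displayed(25.2))  # 25.0 25.0 -> no change shown
print(displayed(25.4))                   # 25.5 -> change exceeds resolution
```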

Linearity
Linearity is defined as the ability of an instrument to reproduce the input
characteristics symmetrically and linearly (as a straight line). It is
commonly quantified as the maximum deviation of the calibration curve from
the ideal straight line. Instruments are said to be linear when increments in
input and the corresponding increments in output maintain a constant ratio
over the specified range.

Stability
It is the ability of an instrument to retain its performance throughout a
specified operating life.

Threshold
If the instrument input is increased very gradually from zero there will be
some minimum value below which no output change can be detected. This
minimum value defines the threshold of the instrument.
Readability
This indicates the closeness with which the scale of an analog type of
instrument can be read. The readability of an instrument depends
upon following factors:
i) Number of graduations
ii) Spacing between the graduations
iii) Size of the pointer
iv) Discriminating power of the observer
The readability is actually the number of significant figures in the
instrument scale. The higher the number of significant figures, the
better would be the readability.
Tolerance
The maximum allowable error in the measurement is specified in terms of
some value which is called tolerance.

Range or span
The minimum and maximum values of a quantity for which an instrument
is designed to measure is called its range or span.

Dynamic Characteristics of an Instrument


Dynamic characteristics of a measuring instrument describe its behavior
between the time a measured quantity changes value and the time when the
instrument output attains a steady value in response. As with static
characteristics, any values for dynamic characteristics quoted in instrument
data sheets only apply when the instrument is used under specified
environmental conditions. Outside these calibration conditions, some
variation in the dynamic parameters can be expected. The set of criteria
which defines how rapidly the response of an instrument changes with time
is called the dynamic characteristics of an instrument.
Some of the dynamic characteristics of instruments are explained below:
Dynamic Performance
This is a measure of how well a system responds to a changing input. The
dynamic specification can be defined by applying one of or more standard
input signals and then examining the resultant output. Various standard test
inputs are as follows:
 Step Signal
 Ramp Signal
 Sine Wave signal
 Parabolic Signal
 Impulse input

To understand the instrument/system response, we can examine the response
to these test signals under two analysis methods as follows:
 Time Domain Analysis
 Frequency Domain Analysis
If the output of control system for an input varies with respect to time, then it
is called the time response of the control system whereas the frequency
response performance refers to the performance of the system subject to
sinusoidal input of varying frequency. The sine wave signal is used for
frequency domain analysis; all of the other test signals are used for time
domain analysis.

Response time
Response Time is defined as the time required by instrument or system to
settle to its final steady position after the application of the input. So we can
say response time determines required time to produce output when input is
applied to the instrument.

Speed of response
Speed of response is defined as the rapidity with which an instrument or
measurement system responds to changes in the measured quantity. In other
words, it is the speed at which the instrument responds whenever there is
any change in the quantity to be measured. It indicates how fast the
instrument is.
Measuring Lag
Measuring lag is defined as the delay in the response of an instrument to a
change in the measured quantity/ input signal, since an instrument does not
react to a change in input immediately. In high-speed measurement
systems, as in dynamic measurements, it becomes essential that the time lag
be reduced to a minimum.
Measuring lag is of two types-

 Retardation type
 Time delay
In Retardation type of measuring lag, the response begins immediately after
a change in measured quantity has occurred, whereas in time delay type of
measuring lag, the response of the measurement system begins after a dead
zone following the application of the input.

Fidelity
Fidelity of a system is defined as the ability of the system to
reproduce the output in the same form as the input. It is the
degree to which a measurement system indicates changes in the
measured quantity without any dynamic error. If a
linearly varying quantity is applied to a system and the output
is also a linearly varying quantity, the system is said to have 100
percent fidelity. Ideally a system should have 100 percent fidelity
and the output should appear in the same form as that of input
and there is no distortion produced in the signal by the system. In
the definition of fidelity any time lag or phase difference between
output and input is not included.
“It is defined as the degree to which a measuring instrument is
capable of faithfully reproducing the changes in input, without
any dynamic error.”
Dynamic Error
The dynamic error is the difference between the true value of the quantity
changing with time and the value indicated by the instrument if no static
error is assumed. However, the total dynamic error of the instrument is the
combination of its fidelity and the time lag or phase difference between input
and output of the system.
Thus, in a nutshell, signal response is defined as the output response of an
instrument when an input test signal is applied to it. There are two type of
response

 Static Response
 Dynamic Response
When an input is applied to an instrument or a measurement system, the
instrument or the system cannot immediately take up its final steady-state
position. It goes through a transient state and then reaches a steady state. The
transient-state response of the instrument is called the dynamic response of the
instrument, whereas steady-state analysis determines the static response. A
figure showing the static and dynamic parts of the response, as transient and
steady-state responses, is given as follows:

Here, both the transient and the steady states are indicated in the figure. The
responses corresponding to these states are known as transient and steady
state responses.

Mathematically, we can write the time response c(t) as

c(t) = ctr(t) + css(t)

where ctr(t) is the transient response, which dies out as time increases, and
css(t) is the steady-state response.

Criteria for Selecting Instruments for Measurement
One of the tasks in planning quality inspection is the selection of measuring
instruments.
The measuring instruments are the most important part of the measuring
process so their selection has to be done carefully. The selection of measuring
instruments is a complex task, which depends on the size, the character and
the value of measured magnitude.
The selection of measuring instruments for linear measurements, takes the
following main factors into account: manufacturing program, the construction
features of the details and manufacturing accuracy – the tolerance zone,
measuring instrument error and the measuring costs.
In single-unit production companies, special measurement instruments
are impractical, so it is recommended that dimensional control of
manufactured products be carried out using universal measuring equipment
(calipers, micrometers, indicating internal gauges, etc.). In serial
production, the main measurement, testing and control instruments are limit
gauges, measurement templates and semiautomatic measurement instruments.
The selection of measurement instruments involves a set of
metrological, operational and economic indices. The metrological indices
are: scale interval, measurement method, accuracy, and measurement range
(interval). The operational and economic indices are the cost and
reliability of the measurement instruments, running time before repair is
needed, inspection intervals, ease of use, and inspection and repair costs,
including the cost of delivering the measurement instrument to the place of
inspection and back.
Certain information is usually required for the preliminary selection of
measurement instruments. The purpose of preliminary selection is to
narrow down the possible solutions when selecting the proper measuring
equipment. For the preliminary selection of measurement instruments, the
main criteria, which include organizational and technical criteria, are taken
into account. These criteria may be arranged by priority in the selection of
measuring instruments.
Main Criteria are:
 Given measurement task
 Measured quantity
 Measured range of the parts
 The dimensions and tolerances
 Available time for test

Minor Criteria:
 The form in which the measured values have to be presented
 How the values have to be processed
 How the measuring equipment has to be used
 Who should operate the measuring equipment

Advanced Minor Criteria:


 Type/construction
 Environmental conditions
 Sensors
 Control
 Software
 Consulting and services

In summary, the factors for the selection of measurement instruments are:
Metrological task:
- Measurement object
- Inspection characteristic
- Inspection scope
- Results documentation

Additional Conditions:
- Measurement place
- Environmental influences
- Measuring time (cycle time)
- Level of automation

Output data
- Standards
- Legislation
- Regulations
- Guidelines
- Safety requirement
- Customers requirement
- Internal instructions

Organization
- Existing Measurement Instruments
- Metrological infrastructure

Costs
- Labour cost
- Staff cost
- Training cost
- Operational preparation
- Cost for monitoring of inspection equipment

INSTRUMENTS
The Working Principle, Types, and Applications of a
Manometer

A manometer is a popular device used for measuring fluid pressure with
respect to an external reference, which is normally taken to be the
atmosphere.
A simple manometer can be assembled by partly filling a transparent plastic
tube with a coloured fluid so that the fluid level can be easily observed.

The tube is then bent into a U-shape and fixed in a vertical position.
The levels of the liquid in the two upright limbs should be equal at this
point, as both are exposed to the same pressure.

This level is therefore marked and identified as the zero level of the
manometer. The instrument is mounted against a measuring scale so that
any difference in the height of the two columns can be read.

This height difference can be used directly to make a comparison between
various test pressures. This kind of manometer can furthermore be employed
to evaluate absolute pressure when the density of the liquid in the
manometer is known.
A manometer
How does the Manometer work?

The working of a manometer is as follows: in gauge-type designs, the
instrument includes a metal shaft or cylinder. During the measurement of a
gas or fluid, the flexible element of the gauge is compressed; the shaft is
then deflected, and this deflection is transmitted to the indicator, enabling
you to read the result.

A liquid-column manometer contains a U-shaped duct or tube in which the
fluid is contained. It is used to assess an unknown pressure by balancing it
against the gravitational force on a liquid column.

The scales are marked on the tube in mm.

The principle on which a manometer estimates a gas or fluid pressure is
remarkably simple.

Hydrostatic equilibrium in the instrument means that the pressure of a
fluid at rest is the same at any given level.

For illustration, if both ends of the U-tube are left open to the
atmosphere, then the pressure on each side will be identical.
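The hydrostatic balance described above reduces to the familiar relation
ΔP = ρ·g·Δh. A minimal sketch, assuming mercury as the manometric fluid:

```python
# Pressure difference indicated by a column height difference delta_h.
RHO_MERCURY = 13590.0  # kg/m^3 (assumed manometric fluid)
G = 9.81               # m/s^2

def gauge_pressure_pa(delta_h_m, rho=RHO_MERCURY):
    return rho * G * delta_h_m

print(round(gauge_pressure_pa(0.100)))  # 13332 Pa for a 100 mm mercury column
```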

Advantages of Manometer:
 It is economical and adequate for low-pressure applications.
 It has a simple construction, good sensitivity, good precision and
reasonable operation and fabrication.
 Manometers do not have to be calibrated against any standard; the pressure
difference can be evaluated from first principles.
 The manometer is available with a wide range of filling liquids of
varying specific gravity.
Disadvantages of Manometer:
 Manometers are large and bulky
 They require leveling
 No fixed reference is available
 In the manometer, errors are introduced by moisture
 It has no over-range protection
 It has a poor dynamic response

Different Types of Manometer and their uses:


There are mainly three different types of manometers and those are:
 Simple manometer
 Differential manometer and
 Micromanometer
1. Simple Manometer:
A simple manometer consists of a tube, one end of which is attached to the
point in the liquid at which the pressure is to be measured, while the other
end is left open to the atmosphere.
A Simple Manometer
Generally, it consists of a glass tube having one end connected to a
measuring point and the other end remaining open. These types of manometers
can be used to measure gauge pressure or vacuum pressure.

Different Types of Simple Manometer are:


 U-tube (For Gauge and Vacuum Pressure)
 Piezometer
 Sensitive or Inclined tube and
 Single Column Manometer

U-Tube Manometer:

The U-tube manometer is the basis for much pressure measurement
equipment. Its name comes from the U-shaped form taken when the two ends
of a flexible tube full of fluid are raised to keep the fluid from running
out of the ends.
The U-tube manometer operates as a 'liquid' equilibrium manometer. The
height difference is read on a graduated scale.

The tube carries mercury or any other liquid whose specific gravity is
considerably higher than the specific gravity of the fluid whose pressure is
to be measured.

For gauge pressure: In gauge pressure measurement, the U-tube
manometer is used to balance the weight of the fluid on one side of the U-
tube against the pressure applied to the other side of the tube.
The difference in the elevation of the fluid represents the pressure pushing
the fluid down on one side of the tube and up the other side.

For vacuum pressure: When a vacuum is connected to one side of the
U-tube, the liquid rises on that side of the tube and falls on the other
side. The height difference, which amounts to the sum of the readings above
and below the zero level, reflects the amount of vacuum. Devices which
operate on these principles are called manometers.

U-Tube Manometer Advantages:

 The U-tube manometer is very easy to use.
 Pressure measurement with a U-tube manometer is relatively economical.
 The U-tube manometer delivers an accurate pressure reading.
 The U-tube manometer is very simple in construction.

U-Tube Manometer Disadvantages:

 The fluid is exposed to the environment in a U-tube manometer;
accordingly, the fluid must be clean and non-toxic.
 When the head is small, it is extremely difficult to read the pressure
accurately.
Piezometer
Piezometers are devices used to measure the fluid pressure in a
system by measuring the height to which a column of the fluid rises
against gravity. They are also called geotechnical sensors, and they are used
for the measurement of pore water pressure through piezometric levels in the
ground.
The piezometer is designed to measure the pressure of pore water in the
ground, in foundations, in concrete structures and in fill.

The vibrating-wire piezometer works on the principle of conversion of
liquid pressure to a frequency signal via a diaphragm and a tensioned steel
wire.

When a magnetic coil is used for excitation, the steel wire vibrates at its
natural frequency. A piezometer is used in foundations, monitoring of soil
embankments and barriers, strength testing of soil, and construction
supervision.

Sensitive manometer or inclined tube manometer

An inclined-tube, or sensitive, manometer is a practical and economical
device. The inclined-tube manometer is normally used for measuring small
differential pressures, for example across a duct. It corresponds to a U-tube
manometer.

This device comprises two intercommunicating vessels filled with a filtered
liquid.
These types of manometers give better readability by stretching a vertical
differential along an inclined indicating column, providing additional
graduations per unit of vertical height and improving the instrument's
sensitivity and precision.
The sensitive manometer is useful for maintaining accurate pressure
readings for industrial steam applications.
In operation, one tube of the inclined-tube manometer opens into a
reservoir, and the other tube is inclined at the angle required by the
application.

Single Column Manometer

The single-column manometer is a modified version of a U-tube manometer in
which one side of the manometer is a large reservoir and the other side of
the manometer is a small tube which is open to the atmosphere.
The single-column manometer directly provides the pressure by measuring
the height in the narrow limb; owing to the large cross-sectional area of the
reservoir, the change of level on that side can be neglected for any change
in pressure.
Normally there are two kinds of single-column manometer, based on the
orientation of the narrow limb: the vertical single-column manometer and
the inclined single-column manometer.

2. Differential Manometer
The differential manometer is equipment that is used to compare the
pressure instead of measuring the pressure. It is mostly used to measure the
pressure difference between the two points or two tubes.

It is also used to inspect for leaks in pipes, as leakage leads to a
pressure imbalance and hence an unbalancing of the manometric liquid.
The water level in a compartment is also gauged by this manometer, and it is
also employed as a fluid level indicator in boiler equipment.
The differential manometers have a simple construction and they are cost-
effective and easy to maintain. Differential manometers can be easily
replaced and have little or no operating cost.
Differential manometers are categorized into two kinds:
 U-tube differential and
 Inverted U-tube differential.
U-tube differential manometer:
The U-tube differential manometer consists of a glass tube bent into a U
shape. Both ends of the U-tube in the manometer are attached to the points
whose pressure difference is to be measured.

In the U-tube, a fluid known as the manometric fluid is injected. The
manometric fluid has a higher specific gravity than the fluids present in the
pipe.

Primarily, mercury is used as a manometric fluid because it has a high
specific gravity, does not stick to the glass, and is clearly visible. It can
also be used over a wide span of temperatures.
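For the common textbook case of a U-tube differential manometer with the same
process fluid in both limbs and both tapping points at the same level, the
standard relation is ΔP = g·h·(ρ_manometric − ρ_process). That relation is not
stated in the text above, so the sketch below should be read as an
illustration under those assumptions:

```python
G = 9.81  # m/s^2

def differential_pressure_pa(h_m, rho_mano=13590.0, rho_proc=1000.0):
    # h_m: manometric fluid deflection in meters; densities in kg/m^3
    # (assumed values: mercury as manometric fluid, water in the pipe).
    return G * h_m * (rho_mano - rho_proc)

print(round(differential_pressure_pa(0.050)))  # 6175 Pa for a 50 mm deflection
```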

Inverted U-tube differential manometer:


The inverted U-tube manometer is also used for measuring the differences in
the pressure of the fluids.

The space above the fluid in the manometer is filled with air, which can be
admitted or expelled through the tap on the top; this is done to adjust the
level of the fluid in the tube of the manometer.
Mercury is widely used as a manometric fluid because it has stable
properties under typical conditions, such as a constant viscosity.
Micromanometer
A micromanometer is an apparatus used to indicate and measure very small
pressures. It computes air pressure using a container with a "U"-shaped
tube that is open at one or both ends.
The level of the liquid on the open side of the U-tube will be higher on
that side when the air pressure is lower than the reference pressure, and
lower on that side when the air pressure exceeds the reference pressure.

Micromanometers measure the total, static and velocity pressures, as well
as pressure drops across diffusers, fans, filters and coils.

Digital Manometer
A digital manometer uses a microprocessor and pressure transducer to sense
slight changes in pressure. It gives the pressure readout on a digital screen. It
measures differential pressure across two inputs. An analog/digital output in
proportion to the instantaneous pressure can be obtained.

Digital manometers report positive, negative, or differential measurements
between pressures. With the integration of an anemometer, flow readings can
also be recorded on a digital manometer.

Manometer Accuracy
Current standards for accuracy require that manometers be within +/- 3 mm
Hg (mm of mercury) of the reference or within +/- 3 mm Hg or 2% of the
reading (whichever is greater) for extended temperature ranges.
Accuracy in Liquid Manometers
1. U-tube type: +/- ½ of minor scale graduation

2. Well type: +/- ½ of minor scale graduation

3. Inclined type: +/- ½ of minor scale graduation

Accuracy in Digital Manometers


1. General purpose: +/- 0.025 – 0.1% F.S.

2. Calibrating: +/- 0.025 – 0.1% F.S.

Manometer Applications
 Used in the maintenance of heating, ventilation, and air conditioning
(HVAC) systems, low pressure pneumatic or gas systems.
 Construction of bridges, installing swimming pools and other engineering
applications.
 Climate forecasting.
 Clinical applications like measuring blood pressure and in physiotherapy.
 Piezometers are used to measure the pressure in pipes where the liquid is
in motion.
A manometer is one of the earliest and simplest devices used for
measurement of gauge pressure and differential pressures. As mentioned in
this discourse, it has myriad uses in different fields.

The Principles and Applications of the Bourdon Tube Gauge

Bourdon tubes are known for their very wide range of pressure
measurement, extending to almost 100,000 psi (700 MPa). The bourdon tube
is an elastic type of pressure transducer.
The device was invented by Eugene Bourdon in the year 1849. The basic idea
behind the device is that a tube of non-circular cross-section, when deformed
in any way, will tend to regain its circular form under the action of pressure.
The bourdon pressure gauges used today have a slightly elliptical cross-
section, and the tube is generally bent into a C-shape with an arc length of
about 270 degrees. The
detailed diagram of the bourdon tube is shown above.
As seen in the figure, the pressure input is given to a socket which is soldered
to the tube at the base. The other end or free end of the device is sealed by a
tip. This tip is connected to a segmental lever through an adjustable length
link. The lever length may also be adjustable. The segmental lever is suitably
pivoted and the spindle holds the pointer as shown in the figure. A hair spring
is sometimes used to fasten the spindle to the frame of the instrument to
provide the necessary tension for proper meshing of the gear teeth, thereby
freeing the system from backlash. Any error due to friction in the spindle
bearings is known as lost motion. The mechanical construction has to be
highly accurate in the case of a Bourdon Tube Gauge. If we consider a cross-
section of the tube, its outer edge will have a larger surface than the inner
portion. The tube walls will have a thickness between 0.01 and 0.05 inches.

Working
As the fluid pressure enters the bourdon tube, the tube tries to regain its
circular form and, because the tip is free, this action causes the tip to
travel in free space as the tube unwinds. The simultaneous actions of bending
and tension due to the internal pressure produce a non-linear movement of the
free tip. This travel is suitably guided and amplified for the measurement of
the internal pressure. The main requirement of the device is that whenever
the same pressure is applied, the movement of the tip should be the same,
and on withdrawal of the pressure the tip should return to the initial point.
A lot of compound stresses originate in the tube as soon as the pressure is
applied. This makes the travel of the tip to be non-linear in nature. If the tip
travel is considerably small, the stresses can be considered to produce a
linear motion that is parallel to the axis of the link. The small linear tip
movement is matched with a rotational pointer movement. This is known as
multiplication, which can be adjusted by adjusting the length of the lever. For
the same amount of tip travel, a shorter lever gives larger rotation. The
approximately linear motion of the tip when converted to a circular motion
with the link-lever and pinion attachment, a one-to-one correspondence
between them may not occur and distortion results. This is known as
angularity which can be minimized by adjusting the length of the link.
Other than C-type, bourdon gauges can also be constructed in the form of a
helix or a spiral. The types are varied for specific uses and space
accommodations, for better linearity and larger sensitivity. For thorough
repeatability, the bourdon tubes materials must have good elastic or spring
characteristics. The surrounding in which the process is carried out is also
important as corrosive atmosphere or fluid would require a material which is
corrosion proof. The commonly used materials are phosphor-bronze, silicon-
bronze, beryllium-copper, inconel, and other C-Cr-Ni-Mo alloys, and so on.
In the case of forming processes, empirical relations are known to choose the
tube size, shape and thickness and the radius of the C-tube. Because of the
internal pressure, the near elliptic or rather the flattened section of the tube
tries to expand as shown by the dotted line in the figure below (a). The same
expansion lengthwise is shown in figure (b). The arrangement of the tube,
however forces an expansion on the outer surface and a compression on the
inner surface, thus allowing the tube to unwind. This is shown in figure (c).
Expansion of Bourdon Tube Due to Internal Pressure: Like all elastic elements
a bourdon tube also has some hysteresis in a given pressure cycle. By proper
choice of material and its heat treatment, this may be kept to within 0.1 and
0.5 percent of the maximum pressure cycle. Sensitivity of the tip movement
of a bourdon element without restraint can be as high as 0.01 percent of full
range pressure reducing to 0.1 percent with restraint at the central pivot.
Types of Bourdon Tube
There are 3 main types of elastic elements for pressure measurement,
namely
 Bourdon Tubes,
 Bellows, and
 Diaphragm.
C-Type Bourdon Tube
This instrument is by far the most common device used to indicate gauge
pressure throughout the oil gas industry.
A bourdon tube obeys Hooke's Law within its elastic limits: its free end will
experience a movement that is proportional to the fluid pressure applied. The
measuring element, named after Bourdon, is a partially flattened metal tube
formed into a 250° arc. The tube is sealed at one end (the tip) and connected
to the pressure at the other end (the socket).
Any pressure inside the tube exceeding the pressure on the outside causes
the tube to become more circular in cross section. As a result, the tip moves
in an arc. This movement is connected through a lever, quadrant and pinion
to a pointer which moves round a scale to indicate the pressure.
The amount of movement of the free end of the tube is directly proportional
to the pressure applied (provided the tube elastic limit is not exceeded).
Where greater sensitivity is required, the bourdon tube may be constructed in
the form of a Spiral or Helix.
Spiral Bourdon Tube
Spiral Bourdon Tube is made by winding a partially flattened metal tube into
a spiral having several turns instead of a single C-bend arc.
The tip movement of the spiral equals the sum of the tip movements of all its
individual C-bend arcs.
Therefore it produces a greater tip movement than a C-bend bourdon tube. It
is mainly used in low-pressure applications. The spiral bourdon tube is shown in
figure.
Helical Bourdon Tube
The helical type is a bourdon tube wound in the form of a helix. It allows the
tip movement to be converted to a circular motion.
By installing a central shaft inside the helix along its axis and connecting it to
the tip, the tip movement becomes a circular motion of the shaft.
Selection Criteria

When choosing a bourdon tube pressure gauge, users need to pay attention
to their current and future system requirements. Some of the factors to
consider include:

Gauge material and diameter


Common gauge materials include stainless steel, copper alloys and
aluminum. The right material will depend on the chemical properties of the
fluid media. When working with corrosive fluids, it is advisable to use
stainless steel or aluminum. However, aluminum is not suitable for high-
temperature applications. Copper, on the other hand, is not ideal for high-
temperature applications and does not handle highly corrosive fluids.
The dial size of bourdon tube pressure gauges ranges from 1 to 16 inches,
representing 25 millimeters (mm) to 406 mm. That said, users should
consider the readability and space limit requirements when choosing a dial
size.

Temperature and pressure range


The system’s maximum and minimum operating pressure will determine the
right pressure gauge to choose. Always ensure the bourdon tube is not
subjected to excess stress. For optimal operations,
the maximum working pressure should not exceed 75% of the gauge’s full-
scale range (for steady pressure) and 65% of the maximum scale value for
pulsating pressure.
For dry or silicone-filled pressure gauges, the normal temperature range is
-40 F to +140 F (-40 C to +60 C). Oil or glycerin-filled pressure gauges, by
contrast, handle temperatures within the range of -20 C to +60 C
(-4 F to +140 F). Of the two types, glycerin-filled gauges are best
used with warmer fluids. They are also suitable for measuring pulsating
pressure.
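The 75%/65% rule of thumb quoted above is easy to encode as a quick check
(a sketch of the rule, not a full sizing procedure):

```python
def gauge_ok(working_pressure, full_scale, pulsating=False):
    # Keep steady pressure below 75% of full scale, pulsating below 65%.
    limit = 0.65 if pulsating else 0.75
    return working_pressure <= limit * full_scale

print(gauge_ok(6.0, 10.0))                  # True: 6 bar steady, 10 bar gauge
print(gauge_ok(7.0, 10.0, pulsating=True))  # False: above 65% of full scale
```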

Vibration resistivity and corrosion resistance


In applications or environments with little to no vibration, a dry gauge works
fine. However, when the application or system involves some vibration, it is
best to choose a wet gauge since the dry gauge will distort measurement
readings. The liquid in the pressure gauge is designed to absorb any vibration
and pulsation, allowing for easy and accurate readings. If the pressure
pulsation or spikes still interferes with the readings, users can use a pressure
gauge snubber to suppress the unwanted force. With highly corrosive fluids,
liquid-filled stainless steel gauge pressures are recommended. The glycerin or
oil inside the gauge helps lubricate and protect against corrosion of the
deflecting pointer.
Other aspects to consider
Besides the factors identified above, there are other small and seemingly
insignificant aspects to keep in mind. These include the mounting options and
the pressure units. Ideally, bourdon pressure gauges are available with panel
mounting or vertical/ horizontal threaded connections. The two common
pressure units are bar and psi. Always check these factors before buying a
bourdon pressure gauge.

Advantages of the Spiral and Helical Tubes over the C-Type Bourdon
Tube
1. Both the spiral and helical tubes are more sensitive than the C-Type
tube. This means that for a given applied pressure a spiral or helical
tube will show more movement than an equivalent C-Type tube, thus
avoiding the need for a magnifying linkage.
2. Spiral and helical tubes can be manufactured in very much smaller
sizes than the equivalent C-Type tubes. Hence, they can be fitted into
smaller spaces, such as inside recorders or controller cases where a C-
Type would be unsuitable because of the size.
Application of Bourdon Tube Elements
Before using a Bourdon tube on a particular process application, a
number of questions need to be considered. We need only to consider
two here.
1. What is the maximum operating pressure likely to be encountered by
the tube? Manufacturers recommend that the normal operating
pressure should not exceed 60% of the maximum scale reading. For
example, if the normal working pressure were 6 bar, we would select a
bourdon tube instrument ( pressure gauge) having full-scale deflection
of 10 bar.
2. Is the process fluid corrosive or non-corrosive? Material for the bourdon
tubes must be able to handle the process fluid. Therefore, selection of
pressure gauge must take into account the corrosivity of the line fluid.

As one of the most common pressure gauges in the market, the bourdon tube
type finds application in different industries. Some of the common use cases
include:
 Manufacturing and processing: They are common in processing plants
where pressure measurements of the various fluids are critical for
optimal system operation.
 Agriculture: Bourdon tubes are used in agricultural sprayers to keep the
pressure at the desired range.
 Water supply systems: These pressure gauges are used to maintain
optimal working pressures to ensure water is delivered to the desired
distances at the right pressure.
 Other use cases are in the automotive, marine and aerospace industries
and heating, ventilation and air conditioning (HVAC) systems.
Some advantages offered by bourdon tubes include:
 high accuracy when used in low and high-pressure applications
 enhanced safety when used in high-pressure systems
 a compact and durable design
 relatively low cost for its premium features
 supports dynamic pressure loads and works well under heavy vibration

Bourdon tubes are popular pressure gauges used in different industries


thanks to their flexibility, high accuracy and sensitivity. When choosing a
bourdon tube pressure gauge, always prioritize the system requirements and
pay attention to the factors discussed in this article.
Users will also want to choose bourdon tube gauges from reputable
manufacturers. This minimizes the risk of picking a low-quality product.
Where possible, consider seeking professional guidance from an expert and
trusted third-party professional during the selection process.

PRINCIPLES AND APPLICATIONS OF DIAPHRAGM PRESSURE GAUGE


The devices used to measure pressure are known as pressure gauges. The
pressure in the pressure gauge is compared to atmospheric pressure.
pressures greater than atmospheric pressure, gauge pressure is positive.
When pressure is lower than atmospheric pressure, gauge pressure is
negative. The majority of standard pressure gauges are constructed with
either a commercial or industrial function in mind when considering usability
and purpose.
A device termed a “diaphragm pressure gauge,” also referred to as a
“membrane pressure gauge,” uses the deflection of a thin, flexible membrane
to measure the pressure of the fluid in a system. The membrane separates
the pressure gauge’s internal workings from the media, preventing
contamination. The diaphragm pressure gauge can be used with corrosive or
polluted liquid or gaseous media because of this feature.
Construction and Working Principle of Diaphragm Pressure Gauge

The diaphragm can measure gauge, absolute, and differential
pressures. The fundamental component of the diaphragm pressure gauge is a
circular, corrugated diaphragm that is clamped or welded between two
flanges.
The diaphragm is typically made from a robust sheet of stainless steel or
Inconel. A linkage or other mechanism transmits the pressure element's
deflection, so that the pointer receives an amplified version of the
element's slight deflection.
The lower side of the diaphragm receives the process pressure, whereas the
upper side is exposed to air pressure. The diaphragm is lifted or lowered by
the differential pressure across it, which also moves the pointer.

Types of Diaphragm Pressure Gauge


There are mainly two types of diaphragm pressure gauge, which are
explained below:
Metallic Diaphragm Pressure Gauge
The diaphragm pressure gauge is built with a thin and flexible diaphragm
composed of brass or bronze. The movement of the diaphragm regulates the
operation of an indicator or recorder. Due to its versatility to function in
various orientations and its portability, this type of gauge is exceptionally
suitable for installation or use in mobile machinery such as aircraft.
Slack Diaphragm Pressure Gauge
Measuring pressure below atmospheric pressure is difficult due to the small
variations involved. The difference between atmospheric pressure and a complete
vacuum is only 14.7 psi (about 1 kg/cm2), making pressure measurement in
this range more challenging. To measure pressures in the range of 0.01-
0.40 mm Hg (torr), a diaphragm gauge with a large surface area and a weak
spring can be employed. It achieves an accuracy of 1-2%.

Advantages and Disadvantages of Diaphragm Pressure Gauge


The advantages and disadvantages of diaphragm pressure gauge are as follows:
Advantages
o They have the benefit of being very sensitive.
o It can recognize fractional pressure changes in the microsecond time frame.
o Just a tiny quantity of space is needed.
o With this kind of pressure measurement equipment, low pressure, vacuum, and
differential pressure can all be monitored.
o Its use in corrosive settings is appropriate.
Disadvantages
o These are challenging to fix.
o These are not appropriate for measuring very high pressures.
o They require protection against vibration and shock.

Principles and Applications of the Bellow type Gauge


Bellows type Pressure Sensing elements are for low to intermediate pressure.
They are used in vacuum, absolute and differential pressure applications. The
bellows expands or contracts based on the pressure difference across the
inside and outside of the bellows unit.

The bellows are sealed on the free end; pressure is applied at the open port of
the fixed end. Pressure applied to the inside acts on the inside surface,
producing a force that causes the bellows to expand in length, thus
producing motion at the free end. Because of the comparatively large surface
area in comparison with the physical dimensions, the movement-to-pressure
ratio is larger than for the devices mentioned previously. Bellows are
more sensitive, very accurate, and used primarily for the lower-pressure
functions present in receiver devices.
Basic designs of bellows pressure sensor will be described. They are classified
by the reference pressure used as absolute, gauge or differential pressure
detectors.

Absolute gauge (Using bellows)


When absolute pressure is to be sensed with bellows elements, the design
normally involves two bellows: one measuring element and one reference
bellows element. The reference bellows element is fully evacuated
and sealed, while the sensing element is connected to the process. The
pressure of the media to be measured is compared against a reference
pressure which is equal to absolute zero. In this arrangement, an increase in
process pressure will cause the measuring bellows to extend, which results in
an increase of the readout through the motion balance mechanism. If the
process pressure is constant but the barometric pressure changes, force will
be exerted equally on the outside of both bellows, causing no change in the
readout. The evacuated bellows are capable of compensating for barometric
pressure variations as high as 100 mm of mercury.
When bellows are used as the pressure sensing element, it is desirable to
add a spring for ranging and accurate characterization. Without the
calibration spring, temperature effects and work hardening of the bellows
would cause a loss of accuracy.
Differential Pressure Gauge (Dual Bellows)
Dual bellows elements are also available as differential pressure sensors.
Two sealed pressure media chambers are separated by the measuring
elements. If both operating pressures are the same, no movement occurs and
no pressure is indicated. When one of the pressures is higher or lower
than the other, a reading is indicated. If either the low or the high
pressure side is left open to the atmosphere, the unit will detect gauge
pressure or vacuum, respectively.
Advantages of Bellows Pressure Gauge

1. High overload capacity.
2. Pressure ranges can be between 0 and 60 mbar (i.e., low and moderate
pressures).
3. Moderate cost.
4. Suitable for absolute pressure as well as gauge pressure.

Disadvantages of Bellows Pressure Gauge


1. It requires ambient pressure compensation
2. It is not suitable for high pressure measurement

Selection of Material for Bellows


Factors affecting selection of material for bellows are:
 Strength
 Pressure range
 Hysteresis
 Corrosiveness due to environmental impacts, and
 Ease of fabrication, etc.
The Principles and Application of Pressure
Recorders/Sensors
A pressure sensor is a device that can sense a pressure signal
and convert the pressure signal into a usable output electrical
signal according to certain rules. The pressure sensor usually
consists of a pressure-sensitive element and a signal processing
unit. According to different test pressure types, pressure sensors
can be divided into gauge pressure sensors, differential pressure
sensors and absolute pressure sensors.
The pressure sensor is the most commonly used sensor in
industrial construction. It is used in various industrial automatic
control environments, including water conservancy and
hydropower, railway transportation, intelligent buildings,
production automatic control, aerospace, military industry,
petrochemical, oil wells, power, ships, machine tools, pipelines,
and many other industries.
1. Piezoelectric pressure sensors
The main working principle of the piezoelectric pressure sensor is
the piezoelectric effect. The piezoelectric materials mainly used
in piezoelectric sensors include quartz, potassium sodium tartrate
(Rochelle salt), and ammonium dihydrogen phosphate. Among them,
quartz (silica) is a natural crystal in which the piezoelectric
effect was discovered. The piezoelectric property persists within a
certain temperature range and disappears completely once the
temperature exceeds that limit; this limiting temperature is the
so-called Curie point. Because the electric field produced by quartz
changes only slightly with stress, quartz has gradually been replaced
by other piezoelectric crystals. The piezoelectric effect is also
exploited in polycrystals such as piezoelectric ceramics, including
barium titanate piezoelectric ceramics, PZT, niobate piezoelectric
ceramics, lead niobate piezoelectric ceramics, and others.
Piezoelectric sensors are mainly used in the measurement of
acceleration, pressure, and force. The piezoelectric acceleration
sensor is a commonly used accelerometer. It has the
characteristics of simple structure, small size, lightweight, and
long service life. Piezoelectric acceleration sensors have been
widely used in the measurement of vibration and shock in
airplanes, automobiles, ships, bridges, and buildings, especially in
the aviation and aerospace fields.
Piezoelectric effect: When certain dielectrics are deformed by
external forces in a specific direction, polarization will occur inside
them. Positive and negative charges will appear on their two
opposing surfaces. When the external force is removed, it will
return to the uncharged state. This phenomenon is called a
positive piezoelectric effect. When the direction of the applied
force changes, the polarity of the charge changes accordingly.
Conversely, when an electric field is applied in the polarization
direction of the dielectric, these dielectrics will also deform. After
the electric field is removed, the deformation of the dielectric will
disappear. This phenomenon is called the inverse piezoelectric
effect. The type of sensor developed based on the dielectric
piezoelectric effect is called a piezoelectric sensor.
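To make the effect concrete, here is a minimal numerical sketch in
Python. It converts an applied force into generated charge (Q = d·F)
and an open-circuit voltage (V = Q/C); the charge coefficient and
element capacitance below are assumed, illustrative values, not figures
from any particular sensor.

# Sketch of the positive piezoelectric effect: charge proportional to
# force, open-circuit voltage set by the element's capacitance.
D33_QUARTZ = 2.3e-12   # charge coefficient of quartz, C/N (typical order)
CAPACITANCE = 100e-12  # element capacitance, F (assumed)

def piezo_output_voltage(force_newtons):
    """Open-circuit voltage of a piezoelectric element under axial force."""
    charge = D33_QUARTZ * force_newtons  # generated charge, coulombs
    return charge / CAPACITANCE          # volts

print(f"{piezo_output_voltage(10.0) * 1e3:.1f} mV for a 10 N load")  # 230.0 mV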
2. Strain gauge pressure sensors
The working principle of the metal resistance strain gauge is that
the resistance of the strain-sensitive element bonded to the base
material changes with mechanical deformation, an effect commonly
known as the resistance strain effect. The resistance strain gauge
is a sensitive device that converts the strain change on the test
piece into an electrical signal, and it is one of the main
components of the piezoresistive strain sensor.
Metal resistance strain gauge
The most commonly used resistance strain gauges are metal resistance
strain gauges and semiconductor strain gauges. Metal resistance
strain gauges come in two kinds: filament (wire) strain gauges and
metal foil strain gauges. Usually, the strain gauge is tightly bonded
with a special adhesive to the substrate that undergoes mechanical
strain. When the stress on the substrate changes, the strain gauge
deforms with it, its resistance changes, and the voltage across the
resistor changes accordingly. The resistance change of such strain
gauges under stress is usually small, so the gauges are normally
connected into a strain bridge whose output is amplified by an
instrumentation amplifier and then passed to the processing circuit
(usually A/D conversion and a CPU), a display, or an actuator.
The internal structure of the metal resistance strain gauge: the
gauge is composed of a base material, a metal strain wire or strain
foil, an insulating protection sheet, and lead wires. The resistance
value of the strain gauge can be chosen by the designer according to
the application, but its range matters: if the resistance is too
small, the required driving current is too large, the gauge's
resistance varies too much with the environment, zero drift appears
at the output, and the zero adjustment circuit becomes too
complicated. If the resistance is too high, the impedance is too high
and the ability to resist external electromagnetic interference is
poor. Resistance values generally range from tens of ohms to tens of
thousands of ohms.
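Because the resistance change is small, it is usually read out with a
Wheatstone bridge, as described above. The short Python sketch below
shows a quarter-bridge arrangement with one active gauge; the gauge
factor, nominal resistance, and excitation voltage are typical assumed
values, not figures from the text.

GAUGE_FACTOR = 2.0   # typical for metal foil gauges
R_NOMINAL = 350.0    # ohms, a common gauge resistance
V_EXCITATION = 5.0   # volts across the bridge

def quarter_bridge_output(strain):
    """Bridge output voltage for a given strain (dimensionless, e.g. 1e-6)."""
    delta_r = GAUGE_FACTOR * strain * R_NOMINAL  # resistance change, ohms
    # Exact quarter-bridge relation, not the small-signal approximation:
    return V_EXCITATION * delta_r / (4 * R_NOMINAL + 2 * delta_r)

print(f"{quarter_bridge_output(1000e-6) * 1e3:.3f} mV at 1000 microstrain")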
3. Ceramic pressure sensors
The pressure acts on the front surface of the ceramic diaphragm,
causing a slight deformation of the diaphragm. Thick film resistors
are printed on the back of the ceramic diaphragm and connected into
a Wheatstone bridge. Due to the piezoresistive effect of these
resistors, the bridge generates a highly linear voltage signal
proportional to both the pressure and the excitation voltage. The
standard signal is calibrated to 2.0/3.0/3.3 mV/V, depending on the
pressure range. Laser calibration gives the sensor high temperature
stability and stability over time. The sensor comes with temperature
compensation over 0 ~ 70 ℃ and can directly contact most media.

Ceramic pressure sensor


Ceramic is a recognized material with high elasticity and excellent
resistance to corrosion, wear, impact, and vibration. The thermal
stability of ceramic and of its thick film resistors allows an
operating temperature range of up to -40 ~ 135 ℃, with high
measurement accuracy and stability. The electrical insulation
withstands 2 kV, the output signal is strong, and the long-term
stability is good. With their high performance and low price,
ceramic sensors are a development direction for pressure sensors:
in Europe and the United States there is a trend to replace other
types of sensors with them, and in China more and more users are
using ceramic sensors in place of diffused silicon pressure sensors.
4. Sapphire pressure sensors
The sapphire pressure sensor works on the strain-resistance
principle, using silicon-on-sapphire as the semiconductor sensing
element, which gives it excellent measurement characteristics. The
sensor circuit supplies power to the strain bridge and converts the
bridge's unbalance signal into a unified electrical output. In
absolute pressure sensors and transmitters, the sapphire sheet is
joined to a ceramic base with glass solder and acts as the elastic
element, converting the measured pressure into strain gauge
deformation and thus achieving pressure measurement. Semiconductor
sensing elements made of silicon-on-sapphire are not sensitive to
temperature changes and operate very well even under high-temperature
conditions; sapphire also has strong radiation resistance. In
addition, silicon-on-sapphire sensing elements exhibit no p-n
junction drift.

Structure of sapphire pressure sensor

5. Diffused silicon pressure sensors


The working principle of the diffused silicon pressure sensor is
also based on the piezoresistive effect. The pressure of the
measured medium acts directly on the diaphragm of the sensor
(stainless steel or ceramic), so the diaphragm produces a
micro-displacement proportional to the pressure of the medium and
the resistance value of the sensor changes. An electronic circuit
detects this change and converts it into a standard measurement
signal corresponding to the pressure.

Applications of pressure sensors


1. Pressure sensors in the weighing system
In the commercial weighing system of industrial control
technology, pressure sensing technology is increasingly used. In
many pressure control processes, it is often necessary to collect
pressure signals and convert them into electrical signals that can
be automatically controlled.
Pressure control devices made with pressure sensors are
generally called electronic weighing systems. Electronic weighing
systems are becoming more and more important as online control
tools for the flow of materials in various industrial processes. The
electronic weighing system can not only optimize production
during the product manufacturing process and improve product
quality but also collect and transmit data about material flow
during the production process to the data processing center for
online inventory control and financial settlement.
In the automatic control of the weighing process, the pressure
sensor must sense the gravity (weight) signal correctly, respond
quickly, and resist interference well. The signal provided by the
pressure sensor can be directly displayed, recorded, printed, and
stored by the detection system, or used for feedback adjustment
control.
The integration of the pressure sensor and the measurement
circuit greatly reduces the volume of the entire device. In
addition, the development of shielding technology will also
improve the anti-interference ability of the weighing pressure
sensor and the degree of automatic control.

2. Pressure sensors in the Petrochemical Industry


The pressure sensor is one of the most used measuring devices in
automatic control in the petrochemical industry. In large-scale
chemical projects, almost all applications of pressure sensors are
included: differential pressure, absolute pressure, gauge
pressure, high pressure, micro differential pressure, high
temperature, low temperature, and remote transmission flange
pressure sensors of various materials and special processing.
The demand for pressure sensors in the petrochemical industry is
mainly concentrated in three aspects: reliability, stability, and
high precision. Reliability and many additional requirements, such
as range ratio and bus type, depend on the structural design of the
transmitter, the level of mechanical processing technology, and the
structural materials. The stability and high accuracy of the
pressure transmitter are mainly guaranteed by the stability and
measurement accuracy of the pressure sensor.
The measurement accuracy and response speed of the pressure sensor
determine the measurement accuracy of the pressure transmitter,
while the temperature and static pressure characteristics and the
long-term stability of the sensor determine the transmitter's
stability. The petrochemical industry's demands on pressure sensors
therefore come down to four aspects: measurement accuracy with rapid
response, temperature characteristics, static pressure
characteristics, and long-term stability.
The micro pressure sensor is a new type of pressure sensor
manufactured using semiconductor materials and MEMS
technology. It has the advantages of high accuracy, high
sensitivity, good dynamic characteristics, small size, corrosion
resistance, and low cost. Pure single-crystal silicon exhibits very
little fatigue, so a micro pressure sensor made of this material has
good long-term stability. At the same time, the micro pressure
sensor is easy to integrate with a micro temperature sensor,
improving the temperature compensation accuracy, the temperature
characteristics, and the measurement accuracy of the sensor.
If two micro pressure sensors are integrated, static pressure
compensation can be realized, thereby improving the static
pressure characteristics of the pressure sensor. Today, micro
pressure sensors have many advantages that traditional pressure
sensors do not have. Micro pressure sensors can well meet the
needs of pressure sensors in the petrochemical industry.

3. Pressure sensors in water treatment


The environmental protection water treatment industry has
developed rapidly in recent years and has a bright future. In
water supply and sewage treatment, pressure sensors provide
important control and monitoring for system protection and
quality assurance.
The pressure sensor converts the pressure (generally refers to the
pressure of the liquid or gas) into an electrical signal output. The
pressure electrical signal can also be used to measure the level of
the static fluid, so it can be used to measure the liquid level. The
sensitive components of the pressure sensor are mainly
composed of a silicon cup sensitive element, silicone oil, isolation
diaphragm, and air duct. The pressure of the measured medium
is transmitted to the side of the silicon cup element through the
isolation diaphragm and silicone oil. The atmospheric reference
pressure acts on the other side of the silicon cup element through
the air duct. The silicon cup is a cup-shaped single-crystal silicon
wafer with a thin bottom. Under pressure, the cup-bottom diaphragm
undergoes elastic deformation with minimal displacement.
Monocrystalline silicon is an ideal elastomer: its deformation is
strictly proportional to the pressure, and its recovery performance
is excellent.
4. Pressure sensors in the smartphone
Pressure sensors are used to measure atmospheric pressure on
smartphones, but what role does atmospheric pressure
measurement have for ordinary mobile phone users?
(1) Altitude measurement
Mountain climbers are very concerned about their altitude. There are
two commonly used methods for measuring altitude: the GPS global
positioning system, and measuring atmospheric pressure and then
calculating altitude from the pressure value. Due to technical and
other limitations, the typical error in GPS altitude is about ten
meters, and in a forest or under a cliff GPS satellite signals
sometimes cannot be received at all. The air pressure method works
over a wider range of conditions, and its cost can be kept
relatively low. In addition, the pressure sensor in a phone such as
the Galaxy Nexus also includes a temperature sensor, whose reading
can be used to correct the result and increase the accuracy of the
measurement. Adding a pressure sensor to a smartphone's existing GPS
therefore makes three-dimensional positioning more accurate.
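As a worked illustration of calculating altitude from pressure, the
Python sketch below applies the international standard atmosphere (ISA)
barometric formula for the troposphere. The ISA constants are standard;
the example pressure reading is assumed for illustration.

P0 = 101325.0  # sea level standard pressure, Pa
T0 = 288.15    # sea level standard temperature, K
L = 0.0065     # temperature lapse rate, K/m
G = 9.80665    # standard gravity, m/s^2
R = 287.053    # specific gas constant of dry air, J/(kg*K)

def altitude_from_pressure(pressure_pa):
    """Estimate altitude (m) from absolute pressure using the ISA model."""
    return (T0 / L) * (1.0 - (pressure_pa / P0) ** (R * L / G))

print(f"{altitude_from_pressure(83400.0):.0f} m")  # ~1610 m, about Denver's elevation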
(2) Assisted navigation
Many motorists now use mobile phones to navigate, but navigation
around viaducts often goes wrong: GPS cannot judge whether you are
on the bridge or under it, which leads to wrong directions. If a
pressure sensor is added to the mobile phone, its altitude accuracy
can reach about 1 meter, allowing it to assist GPS in measuring the
altitude.
(3) Indoor Positioning
GPS signals cannot be received well indoors. When a user enters a
thick-walled building, the built-in receiver may lose the satellite
signal, so the user's geographic location cannot be recognized and
the vertical height cannot be sensed. If the mobile phone is
equipped with a pressure sensor combined with an accelerometer,
gyroscope, and other technologies, accurate indoor positioning can
be achieved.
5. Pressure sensors in the medical industry
With the development of the medical equipment market, higher
requirements are placed on pressure sensors used in the medical
industry: their accuracy, reliability, stability, and size all need
to improve. Pressure sensors already have good applications in
minimally invasive catheter ablation and the associated temperature
measurement.
Minimally invasive surgery can not only reduce the trauma of the
surgical site but also greatly reduce the patient's pain. To meet
such requirements, in addition to the doctor's surgical operation
experience, we also need medical monitoring equipment. Many
medical devices used for this operation are now tiny, like various
catheters and ablation devices. Catheters include thermodilution
catheters, urethral catheters, esophageal catheters, central
venous catheters, and intracranial pressure vessels, etc.
The ability to place the sensor close to the patient is critical for
many applications, such as in dialysis applications. It is important
to accurately measure dialysate and venous pressure. The
pressure sensor must be able to accurately monitor the pressure
of the dialysate and blood to ensure that it is maintained within
the set range. This type of application requires that the sensor
must be compact and able to withstand liquid media. In many
cases, sensors that are incompatible with liquid media require
additional installation components to protect them. Liquid
medium tolerance is particularly important when monitoring
patient breathing because the sensor here must be able to
withstand the patient's cough and exhaled humid air.
6. MEMS Pressure Sensors
A MEMS (Microelectromechanical System) pressure sensor is a
thin-film element that deforms when subjected to pressure. The
deformation can be measured with strain gauges (piezoresistive
sensing) or through the change in distance between two surfaces
(capacitive sensing).

MEMS pressure sensor


The automotive industry remains the largest application area for
MEMS pressure sensors, accounting for 72% of its sales, followed
by medical electronics at 12%, industrial sectors at
10%. Consumer electronics and military aviation account for the
remaining 6% of the market.
In the automotive field, engine management is its main
application, including manifold air pressure sensors in gasoline
engines and common rail pressure sensors in diesel vehicles. In
order to improve combustion, some organizations are also studying
pressure sensors that can work inside the cylinder to measure more
accurately the proportions of the substances taking part in the
chemical reaction, and feed the data back to the engine management
system.
Due to the harsh working environment, the price of automotive
sensors is much higher than that of consumer sensors. In addition,
automotive sensors require long qualification times and must be able
to work reliably for up to 15 years. Some sensors, such as brake or
tire pressure sensors, are critical to car safety.
A new application of MEMS pressure sensors in automobiles is
transmission system pressure sensing, usually in automatic
transmissions but also in new dual-clutch transmission systems.
German manufacturer Bosch recently entered this market with a MEMS
solution that uses oil to protect the silicon film, allowing it to
withstand pressures up to 70 bar. Porous silicon MEMS devices have
also been used in current side airbag applications.
In the industrial field, the main applications of MEMS pressure
sensors include heating, ventilation, and air conditioning (HVAC),
water level measurement and various industrial processes and
control applications. For example, in addition to accurate altitude
and barometric pressure measurements, aircraft use sensors to
monitor engines, flaps, and other components.
In the past few years, the pressure sensor has made rapid
progress, which has had a positive impact on the competitive
landscape. It has introduced new players to the market and
expanded the scope of existing players in the market.

Calibration of Pressure Measuring Devices


All measuring devices used in critical applications must be
calibrated periodically to remain within tolerance of their
manufacturer’s specifications. Calibration is the comparison
of the pressure measurement of a device under test (DUT) to
a reference standard. Calibration processes are defined in
national and international standards for every measurement
category and application.
Pressure sensors can be found in instruments such
as indicators and controllers and should all be calibrated in order
to minimize the probability of erroneous readings. This page will
serve as a guide for the basics of pressure calibration and will
include how and why we calibrate, traceability and other topics
vital to understanding the pressure calibration industry.
Pressure Calibration Terminology
Having a general knowledge of these pressure calibration terms
and their definitions will help you understand the other concepts
to follow:
Accuracy vs. Uncertainty
Accuracy and uncertainty are two of the most common terms
used to determine the specification of pressure measuring and
controlling devices, however, they are often confused with each
other.
According to the International Vocabulary of Metrology (VIM),
measurement uncertainty is defined as the "parameter
associated with the result of a measurement that characterizes
the dispersion of values that could reasonably be attributed to the
measurand" or a measure of the possible error in the estimated
value as a result of the measurement. In day-to-day terms, however,
it is essentially the accumulation of all the systematic components
that contribute to the overall measurement error. The typical
components contributing to an instrument's measurement uncertainty
are the defined uncertainty of the reference instrument, the effect
of ambient conditions, the intrinsic uncertainty of the instrument
itself, and the deviation recorded in measurement.
Accuracy, on the other hand, is defined in the VIM as the
"closeness of agreement between a measured quantity value and
a true quantity value of a measurand." Accuracy is more of
a qualitative concept rather than a quantitative measurement.
Manufacturers often use this term to represent the standard
value of the maximum difference between measured and actual
or true values.
So what does it really mean for pressure calibration?
With pressure as a measurand, the uncertainty of the instrument
is dependent on the reference calibrator’s uncertainty,
the linearity, hysteresis, and repeatability of measurements
across measurement cycles, and the compensation for ambient
conditions such as atmospheric pressure, temperature, and
humidity. This is typically reported at a certain coverage factor.
The coverage factor determines the coverage probability of the
stated uncertainty, using a numerical multiplier to derive the
expanded uncertainty. The coverage factor is usually symbolized by
the letter "k." For example, k = 2 represents an approximately 95%
confidence level in reporting the expanded uncertainty, while k = 3
represents approximately a 99.7% confidence level. In pressure
calibration, expanded uncertainty is typically reported at k = 2.
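As a minimal sketch of how these pieces combine, the Python snippet
below sums assumed standard uncertainty components in quadrature
(root-sum-square) and applies a coverage factor k, as is common
practice; the component values are invented for illustration.

import math

def expanded_uncertainty(components, k=2.0):
    """RSS-combine standard uncertainties and expand by coverage factor k."""
    combined = math.sqrt(sum(u ** 2 for u in components))
    return k * combined

# reference standard, ambient effects, intrinsic uncertainty, measured deviation
u_components = [0.004, 0.002, 0.003, 0.001]  # %FS, illustrative
print(f"U (k=2): {expanded_uncertainty(u_components):.4f} %FS")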
Accuracy being a qualitative concept allows for more flexibility in
interpretation and may lead to different definitions from different
manufacturers. As accuracy is the overall representation of the
closeness of values, it often encompasses the contributions of
measurement uncertainty, long term stability, and a guard band
over an interval of time. The purpose of this term is to provide the
user with an estimation of the overall worst-case specification of
their instrument over the stated time interval.
Precision
Precision is defined by the VIM as "closeness of agreement
between indications or measured quantity values obtained by
replicate measurements on the same or similar objects under
specified conditions." Precision is a term that defines the nature
of proximity an instrument's measurement would have between
the same measurement taken multiple times under the same
conditions, such as ambient conditions, test setup and reference
instrument used.

In pressure calibration, precision plays a significant role when a
measurement is taken going upscale and downscale in pressure
multiple times during the calibration. The error in the same
measurement between these cycles determines the precision. It is a
specification that encompasses the linearity, hysteresis, and
repeatability of the measurement.

Linearity
In an ideal world, all measuring devices would be linear, i.e. the
deviation between the true value and the measured value throughout
the range could be represented by a straight line. Unfortunately,
this is not the case: all measuring instruments have some level of
nonlinearity, meaning the deviation between the true value and the
measured value varies across the range.

For pressure calibration, nonlinearity is measured by going upscale
through various measuring points and comparing the readings to the
true output. Nonlinearity can be compensated in a few different
ways, such as best fit straight line (BFSL), zero-based BFSL, or
multipoint linearization. Each method has its pros and cons.

The best fit straight line (BFSL) method fits a straight line that
best represents the measuring points and their outputs across the
range, drawn so as to minimize the relative error between the actual
curve and the line itself. This method is most commonly used in
applications requiring lower accuracy, where the nonlinearity
bandwidth is relatively wide.

Zero-based BFSL is a derived form of the BFSL method in which the
line is forced through the zero (or minimum) point of the range, so
that any offset of the zero point is removed.

Multipoint linearization is the most thorough process of the three.
This method adjusts the line segments between multiple points in the
range to come as close as possible to the actual calibration curve.
Although tedious, this approach ensures the greatest correction of
nonlinearity. Measuring points typically include the zero and span
points, plus a number of additional points selected within the range
of the DUT.
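To illustrate, the Python sketch below fits an ordinary least-squares
line to hypothetical calibration readings and reports the residual
(nonlinearity) at each point. Formal BFSL definitions can differ in
detail from a least-squares fit; this is one reasonable reading, and
all data are invented.

applied = [0.0, 25.0, 50.0, 75.0, 100.0]       # true pressures, psi
measured = [0.02, 25.10, 50.12, 75.05, 99.98]  # DUT readings, psi (invented)

n = len(applied)
mean_x = sum(applied) / n
mean_y = sum(measured) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(applied, measured))
den = sum((x - mean_x) ** 2 for x in applied)
slope = num / den
offset = mean_y - slope * mean_x

for x, y in zip(applied, measured):
    residual = y - (slope * x + offset)  # deviation from the best fit line
    print(f"{x:6.1f} psi: nonlinearity {residual:+.3f} psi")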

Hysteresis
Hysteresis is the maximum difference in measurement at a specific
point between readings taken upscale and the same readings taken
downscale. For pressure calibration, hysteresis is measured at each
recorded pressure value by increasing the pressure to full scale and
then releasing it down to the minimum value. Different accreditation
standards require different procedures for calculating the overall
hysteresis; for example, the German guideline DKD-R 6-1 requires the
upscale and downscale values to be recorded twice each, after which
an aggregate hysteresis value is derived for each pressure point.
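A minimal sketch of a per-point hysteresis calculation from one upscale
and one downscale series follows; the readings are invented, and the
repeated-series aggregation required by guidelines such as the one
cited above is simplified here to a single pass.

points = [0, 25, 50, 75, 100]                    # test points, %FS
upscale = [0.00, 25.03, 50.06, 75.04, 100.00]    # readings going up (invented)
downscale = [0.01, 25.08, 50.12, 75.09, 100.00]  # readings coming down (invented)

for p, up, down in zip(points, upscale, downscale):
    print(f"{p:3d} %FS: hysteresis {abs(down - up):.3f}")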
Repeatability
Measurement repeatability is the degree of closeness between the
same measurement taken with the same procedure, operators, system,
and conditions over a short period of time. A typical example of
repeatability is comparing the measurement output at one point in
the range over a certain time interval while keeping all other
conditions the same, including the approach and descent to the
measuring point.

Stability vs. Drift


Stability is defined by the VIM as the "property of measuring
instrument, whereby its metrological properties remain constant
in time." It can be quantified in terms of the duration of a time
interval where the stated property remains a certain value. For
calibration, stability is a part of the overall accuracy definition of
the instrument; it plays a crucial role in determining the
calibration interval of the instruments. All pressure measuring
devices drift over time from the day they were calibrated.

Pressure calibration equipment manufacturers often specify stability
as a byproduct of drift at a specific measuring point or at multiple
points in the range. For absolute pressure instruments, this is the
zero point. Because a zero-point offset causes a vertical shift of
the calibration curve, the drift of this point over time becomes the
determining factor in maintaining the manufacturer's specifications.

As Found vs. As Left Data


These terms are usually found on calibration certificates when a
device returns after being recalibrated. The simplest definition of
as-found data is the data a calibration lab finds on a device it has
just received, prior to making any adjustments or repairs. As-left
data would be what the certificate shows once the calibration is
complete and the device is ready to leave the lab.

Adjustment
As the word suggests, adjustment describes performing some
operation on the measuring system or measuring point so that it
responds with a prescribed output to the corresponding measured
value. In practice, adjustments are performed on specific
measuring points for them to respond according to the stated
manufacturer’s specifications. This is typically the minimum and
maximum points in the range, i.e. zero adjustment and span
adjustment. Adjustment is often carried out after an as-found
calibration has highlighted the measuring points not meeting the
desired specification.

TAR vs. TUR


TAR corresponds to test accuracy ratio and TUR corresponds to
test uncertainty ratio. These both represent the factor by which
the DUT is worse in accuracy or uncertainty, respectively,
compared to the reference standard used for its calibration.
These ratios are regarded as the practical standard for selecting
the optimal reference standard to calibrate the DUTs at hand.

Why Should You Calibrate?


The simple answer is that calibration ensures standardization and
fosters safety and efficiency. If you need to know the pressure of
a process or environmental condition, the sensor you are relying
on for that information should be calibrated to ensure the
pressure reading is correct, within the tolerance you deem
acceptable. Otherwise, you cannot be certain the pressure
reading accuracy is sufficient for your purpose.
A few examples might illustrate this better:
Standardization in processes
A petrochemical researcher has tested a process and determined
the most desirable chemical reaction is highly dependent on the
pressure of hydrogen gas during the reaction. Refineries use this
accepted standard to make their product in the most efficient
way. The hydrogen pressure is controlled within recommended
limits using feedback from a calibrated pressure sensor.
Refineries across the world use this recommended pressure in
identical processes. Calibration ensures the pressure is accurate
and the reaction conforms to standard practices.
Standardization in weather forecasting and climate study
The barometric pressure is a key predictor of weather and a key
data point in climate science. Pressure calibration and barometric
pressure, standardized to mean sea level, ensures that the
pressures recorded around the world are accurate and reliable for
use in forecasting and in the analysis of weather systems and
climate.
Safety
A vessel or pipe manufacturer provides standard working and
burst pressures for their products. Exceeding these pressures in a
process may cause damage or catastrophic failure. Calibrated
pressure sensors are placed within these processes to ensure
acceptable pressures are not exceeded. It is important to know that
these pressure sensors are accurate in order to ensure safety.
Efficiency
Testing has proven a steam-electric generator is at its peak
efficiency when the steam pressure at the outlet is at a specific
level. Above or below this level, the efficiency drops dramatically.
Efficiency, in this case, equates directly to bottom-line profits.
The tighter the pressure is held to the recommended level, the more
efficiently the generator runs and the more cost-effective its
output. With a calibrated, high accuracy pressure sensor, the
pressure can be held within a tight tolerance to provide maximum
efficiency and bottom-line revenue.

How Often Should You Calibrate?


The short answer is as often as you think is necessary for the
level of accuracy you need to maintain. All pressure sensors will
eventually drift away from their calibrated output. Typically it is
the zero point that drifts, which causes the whole calibration curve
to shift up or down. There can also be a span drift component, which
is a shift in the slope of the curve.
The amount of drift and how long it will take to drift outside of
acceptable accuracy specification depends on the quality of the
sensor. Most manufacturers of pressure measuring devices will
give a calibration interval in their product datasheet. This tells the
customer how long they can expect the calibration to remain
within the accuracy specification. The calibration interval is
usually stated in days or years and is typically anywhere from 90
to 365 days. This interval is determined through statistical
analysis of products and usually represents a 95% confidence
interval. This means that statistically, 95% of the units will meet
their accuracy specification within the limits of the calibration
interval. For example, Mensor's calibration interval specification is
given as 365 or 180 days, depending on the sensor chosen.
The customer can choose to shorten or lengthen the calibration
interval once they take possession of the sensor and have
calibration data that supports the decision. An as-found calibration
at the end of each interval will show the sensor to be either in or
out of tolerance. If it is found in tolerance, it can be put back in
service and checked again after another calibration interval. If out
of tolerance, offsets can be applied to bring it back into
tolerance, and the next interval can be shortened to make sure it
holds its accuracy. Successive as-found calibrations provide a
history of each individual sensor, which can be used to adjust the
calibration interval based on that data and on the criticality of
the application where the sensor is used.
Where is Pressure Calibration Performed?
Pressure calibrations can be performed in a laboratory
environment, a test bench, or in the field. All that is needed to
calibrate a pressure indicator, transmitter or transducer is a
regulated pressure source, a pressure standard, a way to read the
DUT, and the means to connect the DUT to the regulated
pressure source and the pressure standard. Pressure rated
tubing, fittings, and a manifold to isolate from the measured
process pressure may be the only equipment necessary to
perform the calibration.
Instruments Used in Pressure Calibration
Deciding what instrument to use for calibrating pressure
measuring devices depends on the accuracy of the DUT. For
devices that ascribe to the highest accuracy achievable,
the reference standard used to calibrate it should also have the
highest achievable accuracy.
The accuracy of DUTs can range widely, but for devices with accuracy
specifications looser than 1-5% of full scale, calibration may not
even be necessary; it is entirely up to the application and the
discretion of the user. Calibration may not be deemed necessary for
devices used only as a visual "ballpark" indication that are not
critical to any safety or process concern. These devices may be used
as a visual estimate of the process pressures or limits being
monitored. To calibrate or not is a decision left to the owner of
the device.
More critical pressure measuring instruments may require
periodic calibration because the application may require more
precision in the process pressure being monitored or a tighter
tolerance in a variable or a limit. In general, these process
instruments might have an accuracy of 0.1 to 1.0% of full scale.
The Calibrator
Common sense says the device being used to calibrate another
device should be more accurate than the device being calibrated.
A long-standing rule of thumb in the calibration industry
prescribes a 4 to 1 test uncertainty ratio (TUR) between the DUT
accuracy and the reference standard accuracy. So, for instance, a
100 psi pressure transducer with an accuracy of 0.04% full scale
(FS) would have to employ a reference standard with an accuracy
of 0.01% FS for that range.
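That check is trivial to express in code; here is a small Python
sketch using the figures from the example above:

def tur(dut_accuracy, ref_accuracy):
    """Test uncertainty ratio: DUT accuracy over reference accuracy (same units)."""
    return dut_accuracy / ref_accuracy

ratio = tur(0.04, 0.01)  # 0.04 %FS DUT against a 0.01 %FS reference
print(f"TUR = {ratio:.1f}:1 -> {'OK' if ratio >= 4 else 'reference not accurate enough'}")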
Knowing these basics will help determine the equipment that can
deliver the accuracy necessary to achieve your calibration goals.
There are several levels of calibration that may be encountered in
a typical manufacturing or process facility, described below as
laboratory, test bench, and field. In general, individual facility
quality standards may define these differently.
Laboratory
Laboratory primary standard devices have the highest level of
accuracy and will be the devices used to calibrate all other
devices in your system. They could be deadweight testers, high
accuracy piston gauges, or pressure controllers/calibrators. The
accuracy of these devices typically range from about 0.001%
(10 ppm) of reading to 0.01% of full scale and should be traceable
to the SI units. Their required accuracy will be determined by
what they are required to calibrate to maintain a 4:1 TUR.
Adherence to the 4:1 rule can be relaxed but it must be reported
on the calibration certificate. These laboratory devices are
typically used in a controlled environment subject to the
requirements of ISO 17025, which is the guideline for general
requirements for the competence of testing and calibration
laboratories. Laboratory test standards are typically the most
expensive devices but are capable of calibration a large range of
lower accuracy devices.
Test Bench
Test bench devices are used outside of the laboratory or in an
instrument shop, and are typically used as a check or to calibrate
pressure instruments taken from the field. They possess sufficient
accuracy to calibrate lower accuracy field devices. These can be
desktop units or panel mount instruments like controllers,
indicators or even pressure transducers. These instruments are
sometimes combined into a system that includes a vacuum and
pressure source, an electrical measurement device and even a
computer for indication and recording. The pressure transducers
used in these instruments are periodically calibrated in the
laboratory to certify their level of accuracy. To maintain an
acceptable TUR with devices from the field, multiple ranges may
be necessary or devices with multiple and interchangeable
transducer ranges internally. The accuracy of these devices is
typically from 0.01% FS to 0.05% FS, and they cost less than the
higher accuracy instruments used in the laboratory.
Field
Field instruments are designed for portable use and typically have
limited internal pressure generation and the capability to attach
external higher pressure or vacuum sources. They may have
multi-function capability for measuring pressure and electrical
signals, data logging, built-in calibration procedures and
programs to facilitate field calibration, plus certifications for use
in hazardous areas. These multi-function instruments are
designed to be self-contained to perform calibrations on site with
minimal need for extraneous equipment. They typically have
accuracy from 0.025% FS to 0.05% FS. Given the multi-function
utility, these instruments are priced comparable to the
instruments used on the bench and can also be utilized in a bench
setting.
In general, what is used to calibrate your pressure instruments in
your facility will be determined by your established quality and
standard operating procedures. Starting from scratch will require
an analysis of the cost given the range and accuracy of the
pressure instruments that need to be calibrated.

How is Pressure Calibration Performed?


Understanding the process of performing a calibration can be
intimidating even after you have all of the correct equipment to
perform the calibration. The process can vary depending on
calibration environment, device under test accuracy and the
guideline followed to perform the calibration.
The calibration process consists of comparing the DUT reading to
a standard's reading and recording the error. Depending on
specific pressure calibration requirements of the quality
standards, one or more calibration points must be evaluated and
an upscale and downscale process may be required. The test
points can be at the zero and span or any combination of points in
between. The standard must be more accurate than the DUT. The
rule of thumb is that it should be four times more accurate but
individual requirements may vary from this.
Depending on the choice of the pressure standard the process will
involve the manual, semi-automatic or fully automatic recording
of pressure readings. The pressure is cycled upscale and/or
downscale to the desired pressure point in the range, and the
readings from both the pressure standard and the DUT are
recorded. These recordings are then reported in a calibration
certificate to note the deviation of the DUT from the standard.

Factors Affecting Pressure Calibration and Corrections


There are several corrections, ranging from simple to complex,
which may need to be applied during the calibration of a device
under test (DUT).
Head Height
If the reference standard is a pressure controller, the only
correction that may need to be applied is what is referred to as a
head height correction. The head height correction can be
calculated using the following formula:
( ρf - ρa )gh
Where ρf is the density of the pressure medium (kg/m³), ρa is the
density of the ambient air (kg/m³), g is the local gravity (m/s²),
and h is the difference in height (m). Typically, if the DUT is
below the reference level the correction will be negative, and vice
versa if the DUT is above the reference level. Whatever the pressure
medium, a head height correction must be calculated whenever the
accuracy and resolution of the DUT warrant it. Mensor controllers
allow the user to input a head height, and the instrument will
calculate the head height correction.
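A short Python sketch of this correction follows; the densities are
assumed, illustrative values (a real correction would use the actual
medium density at operating conditions).

RHO_AIR = 1.2      # ambient air density, kg/m^3 (assumed)
G_LOCAL = 9.80665  # gravity, m/s^2 (standard value used here for simplicity)

def head_correction_pa(rho_fluid, height_diff_m):
    """Head height correction (Pa); height_diff is DUT height minus reference height."""
    return (rho_fluid - RHO_AIR) * G_LOCAL * height_diff_m

# e.g. an oil-filled line (~850 kg/m^3) with the DUT 0.25 m below the reference
print(f"{head_correction_pa(850.0, -0.25):.1f} Pa")  # negative, as noted above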
Sea Level
Another potentially confusing correction is what is referred to as a
sea level correction. This is most important for absolute ranges,
particularly barometric pressure ranges. Simply put, this
correction will provide a common barometric reference regardless
of elevation. This makes it easier for meteorologists to monitor
weather fronts as all of the barometers are referenced to sea
level. For an absolute sensor, as the sensor increases its altitude,
it will approach absolute zero, as expected. However, this can
become problematic for a barometric range sensor as the reading
will no longer be ~14.5 psi when vented to atmosphere. Instead,
the local barometric pressure may read ~12.0 psi. However, this
is not the case. The current barometric pressure in Denver,
Colorado, for example, will actually be closer to ~14.5 psi and not
~12.0 psi. This is because the barometric sensor has a sea level
correction applied to it. The sea level pressure can be calculated
using the following formula:
(Station Pressure / e(-elevation/T*29.263))
Where Station Pressure is the current, uncorrected barometric
reading (in Hg@0˚C), elevation is the current elevation (meters)
and T is the current temperature (Kelvin).
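The same formula in a short Python sketch, with Denver-like values
assumed for illustration:

import math

def sea_level_pressure(station_inhg, elevation_m, temp_k):
    """Sea level corrected pressure from station pressure (inHg), per the formula above."""
    return station_inhg / math.exp(-elevation_m / (temp_k * 29.263))

# ~24.6 inHg station pressure at ~1609 m and 288 K gives ~29.8 inHg (~14.6 psi)
print(f"{sea_level_pressure(24.6, 1609.0, 288.15):.2f} inHg")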
For everyday users of pressure controllers or gauges, those may
be the only corrections they may encounter. The following
corrections apply mainly to piston gauges and the necessity to
perform them relies on the desired target specification and
associated uncertainty.
Temperature
Another source of error in pressure calibrations are changes in
temperature. While all Mensor sensors are compensated over a
temperature range during manufacturing, this becomes
particularly important for reference standards such as piston
gauges, where the temperature must be monitored. Piston-
cylinder systems, regardless of composition (steel, tungsten
carbide, etc.), must be compensated for temperature during use
as all materials either expand or contract depending on changes
in temperature. The thermal expansion correction can be
calculated using the following formula:
1 + (αp + αc)(T - TREF )
Where αP is the thermal expansion coefficient of the piston (1/˚C)
and αC is the thermal expansion coefficient of the cylinder
(1/˚C), T is the current piston-cylinder temperature (˚C) and TREF is
the reference temperature (typically 20˚C).
As the temperature of the piston cylinder increases, the piston-
cylinder system expands, causing the area to increase, which
causes the pressure generated to decrease. Conversely, as the
temperature decreases, the piston-cylinder system contracts,
causing the area to decrease, which causes the pressure
generated to increase. This correction will be applied directly to
the area of the piston and errors will exceed 0.01% of the
indicated value if uncorrected. The thermal expansion coefficients
for the piston and cylinder are typically provided by the
manufacturer, but they can be experimentally determined.
Distortion
A similar correction that must be made to piston-cylinder systems
is referred to as a distortion correction. As the pressure increases
on the piston-cylinder system, it will cause the piston area to
increase, causing it to effectively generate less pressure. The
distortion correction can be calculated using the following
formula:
1 + λP
Where λ is the distortion coefficient (1/Pa) and P is the calculated,
or target, pressure (Pa). With increasing pressure, the piston area
increases, generating less pressure than expected. The distortion
coefficient is typically provided by the manufacturer, but it can be
experimentally determined.
Surface Tension
A surface tension correction must also be made with oil-lubricated
piston-cylinder systems as the surface tension of the fluid must
be overcome to “free” the piston. Essentially, this causes an
additional “phantom” mass load, depending on the diameter of
the piston. The effect is larger on larger diameter pistons and
smaller on smaller diameter pistons. The surface tension
correction can be calculated using the following formula:
πDT
Where D is the diameter of the piston (meters) and T is the
surface tension of the fluid (N/m). This correction is more
important at lower pressures as it becomes less with increasing
pressure.
Air Buoyancy
One of the most important corrections that must be made to
piston-cylinder systems is air buoyancy.
As introduced during the head height correction, the air
surrounding us generates pressure... think of it as a column of air.
At the same time, it also exerts an upward force on objects, much
like a stone in water weighs less than it does on dry land. This is
because the water exerts an upward force on the stone, causing it
to weigh less. The air around us does exactly the same thing. If
this correction is not applied, it can cause an error as high as
0.015% of the indicated value. Any mass, including the piston, will
need to have what is referred to as an air buoyancy correction.
The following formula can be used to calculate the air buoyancy
correction:
1 - ρa/ρm
Where ρa is the density of the air (kg/m3) and ρm is the density of
the masses (kg/m3). This correction is only necessary with gauge
calibrations and absolute by atmosphere calibrations. It is
negligible for absolute by vacuum calibrations as the ambient air
is essentially removed.
Local Gravity
The final correction and arguably the largest contributor to errors,
especially in piston-gauge systems, is a correction for local
gravity. Earth’s gravity varies across its entire surface, with the
lowest acceleration due to gravity being approximately 9.7639
m/s2 and the highest acceleration due to gravity being
approximately 9.8337 m/s2. During the pressure calculation for a
piston gauge, the local gravity may be used and a gravity
correction may not need to be applied. However, many industrial
deadweight testers are calibrated to standard gravity (9.80665
m/s2) and must be corrected. Were an industrial deadweight
tester calibrated at standard gravity and then taken to the
location with the lowest acceleration due to gravity, an error
greater than 0.4% of the indicated value would be experienced.
The following formula can be used to calculate the correction due
to gravity:
gl/gs
Where gl is the local gravity (m/s2) and gs is the standard gravity
(m/s2).
The simple formula for pressure is as follows:
P = F / A = mg / A
This is likely the fundamental formula most people think of when
they hear the word “pressure.” As we dive deeper into the world
of precision pressure measurement, we learn that this formula
simply isn't thorough enough. The formula that incorporates all of
these corrections (for gauge pressure) is as follows:
P = (mg(1 - ρa/ρm) + πDT) / (Ae(1 + (αp + αc)(T - TREF))(1 + λP)) + (ρf - ρa)gh
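As a numerical illustration, the Python sketch below assembles the
corrections above into the gauge pressure equation just quoted,
iterating because the distortion term depends on the pressure being
solved for. Every coefficient value is assumed for illustration; in
practice they come from the piston gauge's calibration certificate.

import math

def piston_gauge_pressure(mass_kg, g_local=9.80665, rho_air=1.2, rho_mass=7920.0,
                          piston_diam_m=0.01, surface_tension=0.031,
                          area_m2=7.853982e-5, alpha_p=4.5e-6, alpha_c=4.5e-6,
                          temp_c=23.0, t_ref_c=20.0, distortion_per_pa=5e-12,
                          rho_fluid=1.2, head_m=0.0):
    """Effective gauge pressure (Pa) with air buoyancy, surface tension,
    thermal expansion, distortion, and head height corrections applied."""
    force = (mass_kg * g_local * (1.0 - rho_air / rho_mass)
             + math.pi * piston_diam_m * surface_tension)
    p = 1.0e5  # initial guess; fixed-point iteration converges quickly
    for _ in range(5):
        area = (area_m2 * (1.0 + (alpha_p + alpha_c) * (temp_c - t_ref_c))
                        * (1.0 + distortion_per_pa * p))
        p = force / area
    return p + (rho_fluid - rho_air) * g_local * head_m

print(f"{piston_gauge_pressure(5.0):.0f} Pa")  # ~0.62 MPa for a 5 kg load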
Principles and Applications of Differential Pressure
Measuring Devices

Differential pressure flow measurement has become a reliable
solution that simplifies operation and enhances precision across
various applications. Among these instruments, the McCrometer DP
flow meter is a standout choice, known for its versatility and
compatibility with multiple industries. Whether an application
requires an accuracy of ±0.5% or a more critical task demands even
greater precision and repeatability, this device delivers top-notch
performance.

Key Takeaways on Differential Pressure Flow Meters:


 Working Principle: Uses Bernoulli principle for high accuracy in
liquids, steam, and gases.
 Self-Conditioning: McCrometer’s V-Cone handles turbulent flow with
minimal straight-run requirements.
 Maintenance: Lack of moving parts reduces upkeep; some models
need no recalibration.
 Customization: Tailored options enhance efficiency and adaptability.
 Installation: Compact design saves space; pre-configured models ease
setup.
 Accuracy: Achieves ±0.5% accuracy; low permanent pressure loss
ensures efficient operation.
 Applications: Versatile across industries like oil & gas, LNG, mining,
and power generation.

One of the most popular differential pressure flow meters is the V-Cone.
Engineers and operators often favour the V-Cone over other DP meters,
such as the Orifice Plate and Venturi Meter, particularly in industrial and
oil and gas flow projects. Since its introduction in 1985, the V-Cone has
earned a reputation for reliability, offering a lifespan of over 25 years.

What makes the V-Cone differential pressure flow meter especially
valuable is its custom-engineered design, which allows it to meet
the specific and often complex requirements of various flow
projects. Engineers appreciate its ability to provide accurate and
repeatable measurements throughout the entire lifecycle of a
facility or operation. The V-Cone is highly adaptable, making it
suitable for challenging installations involving water, crude oil,
wet gas, liquid natural gas, steam, and more.

Differential Pressure Flow Meter (DP Flow Meter)


A differential pressure flow meter, also known as a DP flow meter or
differential flow meter, measures the flow of liquids, steam, or gases
through a pipe using the Bernoulli principle, which relates fluid velocity
to pressure. This type of meter is known for its high accuracy and
reliability.
A DP flow transmitter is a device that measures the differential
pressure and converts it into a flow rate. (Mag meters, by contrast,
are flow meters that use electromagnetic induction to measure the
flow rate of water-based fluids accurately, requiring minimal
maintenance due to their lack of moving parts.)
Differential Pressure
Differential pressure (DP) refers to the variance between two pressures
applied in a system. In level measurement, DP utilizes pressure
readings and fluid density to determine the level.
This method is widely adopted across various industries. In flow
measurement, differential pressure flow meters gauge fluid flow by
assessing the pressure drop across a constriction in a pipe. This
principle allows for accurate and reliable measurement of flow rates in
diverse applications.
How Differential Pressure Flow Measurement Works
Differential pressure flow measurement works by strategically placing a
constriction or obstruction in a pipe, causing a drop in pressure. This
drop is directly related to the flow rate of the fluid passing through the
pipe.
According to Bernoulli’s equation, the pressure drop across the
constriction correlates with the square of the flow rate. Pressure
sensors measure the pressure before and after the constriction, and
by comparing the two pressures, DP flow meters can calculate the
flow rate with precision. This method ensures a rapid response to
changes in flow conditions, regardless of the fluid’s velocity or
other characteristics.

Differential pressure flow measurement thus provides a reliable
means to monitor and control fluid flow in applications ranging from
industrial processes to environmental monitoring, using fundamental
principles of fluid dynamics and pressure differentials.
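The square-root relationship can be sketched in a few lines of Python.
The flow coefficient K lumps the meter geometry and discharge
coefficient into one assumed value for illustration; real meters are
supplied with calibrated coefficients.

import math

K = 0.0025   # lumped flow coefficient (assumed)
RHO = 998.0  # water density, kg/m^3

def volumetric_flow(dp_pa):
    """Flow rate (m^3/s) from differential pressure: Q = K * sqrt(dp / rho)."""
    return K * math.sqrt(dp_pa / RHO)

for dp in (1000.0, 4000.0):  # quadrupling the dP doubles the flow
    print(f"dP = {dp:5.0f} Pa -> Q = {volumetric_flow(dp) * 1000:.2f} L/s")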

Here are some standout products in McCrometer's portfolio that
demonstrate the versatility and precision of DP flow meters.

Types of DP Flow Meter

V-Cone DP Flow Meter


McCrometer’s V-Cone Flow Meter is an advanced differential pressure
instrument, which is ideal for use with liquid, steam or gas media in
even the most challenging conditions.
 Minimal-to-no straight run required
 Flanged, threaded, hub or weld-end standard; other end connections on
request
 Materials of construction matched to your pipe specifications
 Up to 1,600 °F (870°C)
 Up to 20,000 PSI, ±0.5% accuracy
 0.5” – 120” and larger line sizes available
 25+ year lifespan

Wafer-Cone DP Flow Meter


The Wafer-Cone flow meter uses the same revolutionary principles as
the V-Cone. Its self-conditioning means little or no upstream or
downstream piping runs are required.
 Ideal for tight space installations
 Flangeless design with removable cone for varying beta ratios from
0.45 – 0.85
 Minimal-to-no straight run required
 304 or 316 stainless steel construction
 1” – 6” line sizes available
 Up to 20,000 PSI, ±1% accuracy
 Up to 1,600 °F (870°C)

ExactSteam Flow Meter


McCrometer’s ExactSteam V-Cone System is a complete flowmeter for
steam metering, factory configured for energy metering or mass flow.
 25+ year lifespan
 Complete flowmeter for steam metering, factory configured for energy
metering or mass flow
 Up to 50:1 turndown with stacked transmitters
 ±0.5% accuracy for primary element, ±1% for total system

VM V-Cone Flow Meter


The VM V-Cone System has an advanced differential pressure flow
sensing design. The flow meter features built-in flow conditioning for
superior accuracy. The VM V-Cone is the ideal new or retrofit flow meter
for multiple clean water and wastewater treatment applications.
 Straightening vanes to generate optimum flow profiles
 Requiring only 1.5 straight pipe diameters upstream and 0-0.5
downstream
 Compatible with line sizes 10” – 12”
 Epoxy-coated carbon steel body; all stainless available

More reliable measurements


Differential pressure meters provide precise measurements, even in
turbulent flow and diverse flow characteristics. Their unique design
ensures accuracy and consistent readings, maintaining reliability in
challenging or changing flow conditions.
McCrometer’s V-Cone DP meter is self-conditioning, allowing it to
handle turbulent flow effectively with little to no straight-run
requirements. This feature sets it apart from other DP meters that
typically need a cleaner, more laminar flow upstream of the meter for
accurate measurements.
Minimal maintenance
Since certain differential pressure flow meters lack moving parts, they
do not require the maintenance typically associated with orifice-plate or
ultrasonic meter designs.
Some models even eliminate the need for recalibration, relieving facility
managers and stakeholders from routine system maintenance duties.
Customizable solutions
DP meters offer numerous customizable aspects tailored to meet
specific customer requirements, including turndown, total pressure loss,
and valve size, all aimed at optimizing operational efficiency.
Installation flexibility
Certain differential pressure meters can be pre-configured for specific
applications, such as across filters, backflow preventers, and heat
exchangers, making them ready to install. This streamlined approach
reduces both installation time and costs while enhancing overall
productivity.
Reduced footprint
DP flow meters are built to fit in a compact space and reduce the amount of piping that leads up to and away from the meter.
This approach reduces the need for extensive piping and support
configurations, saving both space and costs without compromising on
accuracy. It also addresses critical requirements for applications where
space and weight on platforms are significant factors.
Low permanent pressure loss
Differential pressure meters ensure a consistent flow around
transitions, minimizing permanent pressure loss.
Minimal straight-pipe requirements
Unlike alternative meter devices that incur additional costs for piping
and structural support, certain DP meters can reduce both upstream
and downstream pipe requirements.
Thus, they provide smooth flow profiles across measurement zones,
ensuring accurate and reliable readings.
DP Flow Meter Applications
DP meters are versatile, operating effectively in both low and high-
temperature applications, and can be used in large pipes. They are
easy to maintain and can be constructed from various materials,
making them suitable for measuring many types of gases and liquids.
The V-Cone is crafted to excel in demanding environments across oil
and gas production, industrial settings, pulp and paper mills, mining
operations, food and beverage processing, pharmaceuticals, aerospace
applications, and more.
Oil & Gas
McCrometer’s V-Cone differential pressure flow meters are utilized
across a range of oil and gas applications globally, spanning upstream,
midstream, and downstream flow measurement projects. Whether
measuring liquid, steam, or gas flow media, the V-Cone family of
devices consistently surpasses expectations, even in the most
demanding installations.
Liquid Natural Gas (LNG) & Compressed Natural Gas
Precise measurement of liquid natural gas (LNG) and compressed
natural gas (CNG) is crucial for efficiency and safety. The V-Cone
differential pressure flow meter consistently meets and surpasses
customer expectations for accuracy throughout the production,
purification, liquefaction, transportation, and regasification processes.
Mining, Metals, and Mineral Processing
McCrometer offers a complete range of flow measurement solutions
tailored for diverse applications in the mining, metals, and minerals
sectors. The products excel in measuring everything from source and
stormwater to slurries, fuel, and steam, designed to handle challenging
flow conditions with reliability and precision.
Power Generation
Differential pressure flow meters provide the flow data needed to make
informed decisions about power plant operations – whether for cooling
water, steam, fuel, or effluent flow measurement. McCrometer has
engineered valuable solutions for power generation applications that
help save time and money.
Differential Pressure Flow Meter Elements
1. Primary element:
There are a variety of designs and options that might be considered
primary elements of a DP meter. Averaging pitot tubes, an integral flow
orifice (IFO), and wedges are all options that may be used based on
specific line sizes and flow rates. Engineers must consider the benefits
of each (such as multiple mounting configurations, greater process
control, ease of installation, maintenance cost, and reduced permanent
pressure loss) to determine which primary element they’ll use.
2. Secondary element:
The differential pressure transmitter for flow measurement is
considered the secondary element, and it measures differential
pressure flow. In some cases, this element might include a flow
computer with temperature and static pressure input.
3. Tertiary element:
Although not every DP meter includes tertiary elements, advanced
systems often do. Gas chromatographs and temperature transmitters
are considered tertiary elements.
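To make the division of labour concrete, here is a minimal sketch of the kind of compensation a flow computer (the secondary and tertiary elements together) can apply to a gas measurement, combining the DP signal with static pressure and temperature inputs. The meter factor and reference conditions are assumed values for illustration, not figures from any real device.

```python
# Sketch of flow-computer compensation for a DP-based gas measurement:
# a denser gas (higher pressure, lower temperature) carries more mass
# for the same differential pressure, roughly Q ~ sqrt(dP * P / T).
import math

K_BASE = 120.0    # meter factor from calibration (assumed)
P_REF = 101.325   # reference static pressure, kPa absolute
T_REF = 288.15    # reference temperature, K

def compensated_gas_flow(dp_kpa, p_static_kpa, temp_k):
    return K_BASE * math.sqrt(dp_kpa * (p_static_kpa / P_REF) * (T_REF / temp_k))

# Same dP, but higher line pressure means denser gas and more flow:
print(compensated_gas_flow(25.0, 500.0, 300.0))
print(compensated_gas_flow(25.0, 900.0, 300.0))
```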
FAQs on DP Flow Meters
What Causes High Differential Pressure?
When there’s an increase in differential pressure at steady flow rates,
it’s typically caused by debris, fouling, or scale buildup within the flow
channels of the element.
What Causes Differential Pressure To Drop?
The drop in differential pressure is caused by the resistance and friction
loss as the medium flows through the pipeline. When fluid flows through
a restriction, it speeds up, drawing energy from its static pressure,
resulting in a line pressure drop at the constriction point. As the fluid
moves past the restriction, some of this pressure is recovered.
How Do You Adjust Differential Pressure?
Recommended straight pipe lengths typically range from 10 to 50 times
the pipe diameter upstream and 5 to 10 times downstream, depending
on the flow meter and application. Achieving accurate metering in
water and wastewater applications is often challenging due to various
factors. One significant factor is the configuration of the pipelines,
which can include elbows, valves, reducers, headers, and other
structures that create turbulence and affect reading accuracy.
To mitigate this turbulence, it is essential to use a flow-conditioning
device or long sections of straight, smooth pipe to allow turbulence to
dissipate before accurate readings can be obtained.
Does Temperature Affect Differential Pressure?
Temperature variations can negatively impact the accuracy of level
measurements.
In tank farms, it’s common to see a temperature difference between
the two process connections on a tank. Tank farms are designed to fit
as many tanks as possible into a small area, which can cause some
parts, such as the transmitters or high-side process connections, to be
in the shade while others, such as the low-side connections, are in
direct sunlight.
This setup can create a significant temperature difference between the
shaded and sunlit areas. To ensure accurate level measurements, both
process connections and capillaries need to be at the same
temperature.
What is a Good Differential Pressure?
Differential pressure is a key factor when using flow meters. The right
differential pressure ensures precise and reliable measurements, which
are crucial for your system’s performance. The ideal range for
differential pressure often depends on the type and use of the flow
meter.
If the differential pressure is too low, the signal strength may be weak, leading to inaccurate readings. Conversely, if the pressure is too high, it can strain the flow meter, causing wear and tear and potentially leading to maintenance problems. Meter manufacturers such as McCrometer provide service, maintenance, and calibration support for their flow meters.
What is an Orifice Plate with a Differential Pressure Transmitter?
The orifice plate, also known as an orifice meter, is a common type of
differential pressure meter extensively used in natural gas
measurement applications.
When an orifice plate is used for differential pressure measurement, its constriction (the primary element) creates a pressure difference. A differential
pressure transmitter then detects this difference before and after the
constriction, converting it into a standard output signal. This setup
allows for accurate measurement and monitoring of flow rates in
various industrial applications.
Why Should a DP Transmitter Be Installed Above the Tapping Point of
the Orifice Plate When Measuring Gas?
Installing the DP transmitter above the tapping point of the orifice plate
ensures that any condensate or liquids in the gas stream do not
accumulate and affect the accuracy of the differential pressure
measurement. This placement helps maintain reliable and consistent
measurement of gas flow rates, crucial for precise monitoring and
control in various industrial applications.
For dry gases, the DP transmitter should be installed above the pipe
taps in a vertical position. This ensures that any potential liquid or
condensation in the line drains back into the pipe.
For liquids, the DP transmitter is mounted vertically below the pipe
taps. This helps maintain a consistently full line of liquid.
For condensing liquids like steam, the DP transmitter is also mounted
below the pipe taps. It requires a vertical water column that
consistently contains condensate. This maintains stable water pressure
in the impulse lines and protects the transmitter from high-temperature
steam.
Without water present, temperatures exceeding 300°F could potentially
damage the transmitter if it is mounted above the pipe. The specific
temperature at which damage might occur varies by model; however,
many transmitters are susceptible to damage at temperatures above
300°F. Always consult the transmitter’s specifications for precise
temperature limits.
What Is the Typical Lifespan of a DP Flow Meter?
DP meters like the orifice plate and Venturi meter are frequently
compared with the V-Cone for industrial, oil, and gas flow projects.
However, the V-Cone is known for its lifespan of over 25 years.
How Accurate is a DP Flowmeter?
The accuracy of a DP flowmeter can reach ±0.5% of the flow rate or
better across a broad flow range, depending on factors such as design,
calibration, and installation conditions.
Applications of Pressure Regulators
A pressure regulator is a valve that controls the pressure of a fluid to a
desired value, using negative feedback from the controlled pressure.
Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in one body, or consist of a separate pressure sensor, controller and flow valve.
Two types are found: the pressure-reducing regulator and the back-pressure regulator.
 A pressure reducing regulator is a control valve that reduces the input
pressure of a fluid to a desired value at its output. It is a normally-open
valve and is installed upstream of pressure sensitive equipment. [1]
 A back-pressure regulator, back-pressure valve, pressure sustaining
valve or pressure sustaining regulator is a control valve that maintains
the set pressure at its inlet side by opening to allow flow when the inlet
pressure exceeds the set value. It differs from an over-pressure relief
valve in that the over-pressure valve is only intended to open when the
contained pressure is excessive, and it is not required to keep upstream
pressure constant. They differ from pressure reducing regulators in that
the pressure reducing regulator controls downstream pressure and is
insensitive to upstream pressure. [2] It is a normally-closed valve which
may be installed in parallel with sensitive equipment or after the
sensitive equipment to provide an obstruction to flow and thereby
maintain upstream pressure.[1]
Both types of regulator use feedback of the regulated pressure as input
to the control mechanism, and are commonly actuated by a spring
loaded diaphragm or piston reacting to changes in the feedback
pressure to control the valve opening, and in both cases the valve
should be opened only enough to maintain the set regulated pressure.
The actual mechanism may be very similar in all respects except the
placing of the feedback pressure tap. [2] As in other feedback control
mechanisms, the level of damping is important to achieve a balance
between fast response to a change in the measured pressure, and
stability of output. Insufficient damping may lead to hunting
oscillation of the controlled pressure, while excessive friction of moving
parts may cause hysteresis.
Pressure Reducing Regulator

A pressure reducing regulator’s primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, whilst
maintaining a sufficiently constant output pressure. If the load flow
decreases, then the regulator flow must decrease as well. If the load
flow increases, then the regulator flow must increase in order to keep
the controlled pressure from decreasing because of a shortage of fluid
in the pressure system. It is desirable that the controlled pressure does
not vary greatly from the set point for a wide range of flow rates, but it
is also desirable that flow through the regulator is stable and the
regulated pressure is not subject to excessive oscillation.
A pressure regulator includes a restricting element, a loading element,
and a measuring element:
 The restricting element is a valve that can provide a variable restriction
to the flow, such as a globe valve, butterfly valve, poppet valve, etc.
 The loading element is a part that can apply the needed force to the
restricting element. This loading can be provided by a weight, a spring,
a piston actuator, or the diaphragm actuator in combination with a
spring.
 The measuring element functions to determine when the inlet flow is
equal to the outlet flow. The diaphragm itself is often used as a
measuring element; it can serve as a combined element.
In the pictured single-stage regulator, a force balance is used on the
diaphragm to control a poppet valve in order to regulate pressure. With
no inlet pressure, the spring above the diaphragm pushes it down on
the poppet valve, holding it open. Once inlet pressure is introduced, the
open poppet allows flow to the diaphragm and pressure in the upper
chamber increases, until the diaphragm is pushed upward against the
spring, causing the poppet to reduce flow, finally stopping further
increase of pressure. By adjusting the top screw, the downward
pressure on the diaphragm can be increased, requiring more pressure
in the upper chamber to maintain equilibrium. In this way, the outlet
pressure of the regulator is controlled.
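A back-of-envelope force balance makes the adjustment mechanism concrete: at equilibrium the spring force roughly equals the outlet pressure acting on the diaphragm area, so tightening the screw (more spring force) raises the setpoint. The sketch below uses assumed numbers and ignores the poppet spring and inlet-pressure effects discussed in the next section.

```python
# Idealized force balance for a spring-loaded diaphragm regulator:
# P_out ~ F_spring / A_diaphragm at equilibrium.
SPRING_FORCE_N = 120.0        # set by the adjusting screw (assumed)
DIAPHRAGM_AREA_M2 = 4.0e-4    # ~23 mm diameter diaphragm (assumed)

p_out_pa = SPRING_FORCE_N / DIAPHRAGM_AREA_M2
print(f"Regulated outlet pressure ~ {p_out_pa / 1e5:.1f} bar")  # ~3.0 bar
```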
Single-stage regulator
High pressure gas from the supply enters the regulator through the
inlet port. The inlet pressure gauge will indicate this pressure. The gas
then passes through the normally open pressure control valve orifice
and the downstream pressure rises until the valve actuating diaphragm
is deflected sufficiently to close the valve, preventing any more gas
from entering the low pressure side until the pressure drops again. The
outlet pressure gauge will indicate this pressure.
The outlet pressure on the diaphragm and the inlet pressure and
poppet spring force on the upstream part of the valve hold the
diaphragm/poppet assembly in the closed position against the force of
the diaphragm loading spring. If the supply pressure falls, the closing
force due to supply pressure is reduced, and downstream pressure will
rise slightly to compensate. Thus, if the supply pressure falls, the outlet
pressure will increase, provided the outlet pressure remains below the
falling supply pressure. This is the cause of end-of-tank dump where the
supply is provided by a pressurized gas tank. The operator can
compensate for this effect by adjusting the spring load by turning the
knob to restore outlet pressure to the desired level. With a single stage
regulator, when the supply pressure gets low, the lower inlet pressure
causes the outlet pressure to climb. If the diaphragm loading spring
compression is not adjusted to compensate, the poppet can remain
open and allow the tank to rapidly dump its remaining contents.
Two-stage regulator
Two stage regulators are two regulators in series in the same housing
that operate to reduce the pressure progressively in two steps instead
of one. The first stage, which is preset, reduces the pressure of the
supply gas to an intermediate stage; gas at that pressure passes into
the second stage. The gas emerges from the second stage at a
pressure (working pressure) set by user by adjusting the pressure
control knob at the diaphragm loading spring. Two stage regulators
may have two safety valves, so that if there is any excess pressure
between stages due to a leak at the first-stage valve seat, the rising
pressure will not overload the structure and cause an explosion.
An unbalanced single stage regulator may need frequent adjustment.
As the supply pressure falls, the outlet pressure may change,
necessitating adjustment. In the two stage regulator, there is improved
compensation for any drop in the supply pressure.
Applications of Pressure Reducing Regulators
Air compressors
Air compressors are used in industrial, commercial, and home workshop
environments to perform an assortment of jobs including blowing things
clean; running air powered tools; and inflating things like tires, balls,
etc. Regulators are often used to adjust the pressure coming out of an
air receiver (tank) to match what is needed for the task. Often, when
one large compressor is used to supply compressed air for multiple
uses (often referred to as "shop air" if built as a permanent installation
of pipes throughout a building), additional regulators will be used to
ensure that each separate tool or function receives the pressure it
needs. This is important because some air tools, or uses for compressed
air, require pressures that may cause damage to other tools or
materials.
Aircraft
Pressure regulators are found in aircraft cabin pressurization, canopy
seal pressure control, potable water systems, and waveguide
pressurization.
Aerospace
Aerospace pressure regulators have applications in propulsion
pressurant control for reaction control systems (RCS) and Attitude
Control Systems (ACS), where high vibration, large temperature
extremes and corrosive fluids are present.
Cooking
Pressurized vessels can be used to cook food much more rapidly than
at atmospheric pressure, as the higher pressure raises the boiling point
of the contents. All modern pressure cookers will have a pressure
regulator valve and a pressure relief valve as a safety mechanism to
prevent explosion in the event that the pressure regulator valve fails to
adequately release pressure. Some older models lack a safety release
valve. Most home cooking models are built to maintain a low and high
pressure setting. These settings are usually 7 to 15 pounds per square
inch (0.48 to 1.03 bar). Almost all home cooking units will employ a
very simple single-stage pressure regulator. Older models will simply
use a small weight on top of an opening that will be lifted by excessive
pressure to allow excess steam to escape. Newer models usually
incorporate a spring-loaded valve that lifts and allows pressure to
escape as pressure in the vessel rises. Some pressure cookers will have
a quick release setting on the pressure regulator valve that will,
essentially, lower the spring tension to allow the pressure to escape at
a quick, but still safe rate. Commercial kitchens also use pressure
cookers, in some cases using oil based pressure cookers to quickly deep
fry fast food. Pressure vessels of this sort can also be used
as autoclaves to sterilize small batches of equipment and in home
canning operations.
Water pressure reduction
Pressure regulator for domestic water supply: outlet pressure is set with the blue hand wheel and shown on the vertical scale.
A water pressure regulating valve limits inflow by dynamically changing the valve opening: when downstream pressure is low, the valve opens fully, and when downstream pressure rises too high, the valve closes down. In a no-pressure situation where water could flow backwards, that flow is not impeded; a water pressure regulating valve does not function as a check valve.
They are used in applications where the water pressure is too high at
the end of the line to avoid damage to appliances or pipes.
Welding and cutting
Oxy-fuel welding and cutting processes require gases at specific
pressures, and regulators will generally be used to reduce the high
pressures of storage cylinders to those usable for cutting and welding.
Oxygen and fuel gas regulators usually have two stages: The first stage
of the regulator releases the gas at a constant pressure from the
cylinder despite the pressure in the cylinder becoming less as the gas is
released. The second stage of the regulator controls the pressure
reduction from the intermediate pressure to low pressure. The final flow
rate may be adjusted at the torch. The regulator assembly usually has
two pressure gauges, one indicating cylinder pressure, the other
indicating delivery pressure. Inert gas shielded arc welding also uses
gas stored at high pressure provided through a regulator. There may be
a flow gauge calibrated to the specific gas.
Propane/LP gas
All propane and LP gas applications require the use of a regulator.
Because pressures in propane tanks can fluctuate significantly with
temperature, regulators must be present to deliver a steady pressure to
downstream appliances. These regulators normally compensate for
tank pressures between 30–200 pounds per square inch (2.1–13.8 bar)
and commonly deliver 11 inches of water column (0.4 psi; 28 mbar) for residential applications and 35 inches of water column (1.3 psi; 90 mbar) for industrial
applications. Propane regulators differ in size and shape, delivery
pressure and adjustability, but are uniform in their purpose to deliver a
constant outlet pressure for downstream requirements. Common
international settings for domestic LP gas regulators are 28 mbar
for butane and 37 mbar for propane.
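As a quick unit check on those delivery pressures, the sketch below converts inches of water column to psi and mbar using the standard factors (1 inWC ≈ 0.0361 psi ≈ 2.49 mbar):

```python
# Converting LP-gas delivery pressures from inches of water column.
PSI_PER_INWC = 0.03613
MBAR_PER_INWC = 2.4908

for inwc in (11.0, 35.0):
    print(f"{inwc} inWC = {inwc * PSI_PER_INWC:.2f} psi "
          f"= {inwc * MBAR_PER_INWC:.0f} mbar")
# 11 inWC ~ 0.40 psi ~ 27 mbar; 35 inWC ~ 1.26 psi ~ 87 mbar
```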
Gas powered vehicles
All vehicular motors that run on compressed gas as a fuel (internal
combustion engine or fuel cell electric power train) require a pressure
regulator to reduce the stored gas (CNG or Hydrogen) pressure from
700, 500, 350 or 200 bar (or 70, 50, 35 and 20 MPa) to operating
pressure.
Recreational vehicles
For recreational vehicles with plumbing, a pressure regulator is required
to reduce the pressure of an external water supply connected to the
vehicle plumbing, as the supply may be a much higher elevation than
the campground, and water pressure depends on the height of the
water column. Without a pressure regulator, the intense pressure
encountered at some campgrounds in mountainous areas may be
enough to burst the camper's water pipes or unseat the plumbing
joints, causing flooding. Pressure regulators for this purpose are
typically sold as small screw-on accessories that fit inline with the hoses
used to connect an RV to the water supply, which are almost always
screw-thread-compatible with the common garden hose.
Breathing gas supply
Two-gauge pressure regulator connected to a gas cylinder used for breathing gas supply
Pressure regulators are used with diving cylinders for Scuba diving. The
tank may contain pressures in excess of 3,000 pounds per square inch
(210 bar), which could cause a fatal barotrauma injury to a person
breathing it directly. A demand controlled regulator provides a flow of
breathing gas at the ambient pressure (which varies by depth in the
water). Pressure reducing regulators are also used to supply breathing
gas to surface-supplied divers,[5] and people who use self-contained
breathing apparatus (SCBA) for rescue and hazmat work on land. The
interstage pressure for SCBA at normal atmospheric pressure can
generally be left constant at a factory setting, but for surface supplied
divers it is controlled by the gas panel operator, depending on the diver
depth and flow rate requirements. Supplementary oxygen for high
altitude flight in unpressurised aircraft and medical gases are also
commonly dispensed through pressure reducing regulators from high-
pressure storage.
Supplementary oxygen may also be dispensed through a regulator
which both reduces the pressure, and supplies the gas at a metered
flow rate, to be mixed with ambient air. One way of producing a
constant mass flow at variable ambient pressure is to use a choked
flow, where the flow through the metering orifice is sonic. For a given
gas in choked flow, the mass flow rate may be controlled by setting the
orifice size or the upstream pressure. To produce a choked flow in
oxygen, the absolute pressure ratio of upstream and downstream gas
must exceed 1.893 at 20 °C. At normal atmospheric pressure this
requires an upstream pressure of more than 1.013 × 1.893 = 1.918
bar. A typical nominal regulated gauge pressure from a medical oxygen
regulator is 3.4 bars (50 psi), for an absolute pressure of approximately
4.4 bar and a pressure ratio of about 4.4 without back pressure, so they
will have choked flow in the metering orifices for a downstream (outlet)
pressure of up to about 2.3 bar absolute. This type of regulator
commonly uses a rotor plate with calibrated orifices and detents to hold
it in place when the orifice corresponding to the desired flow rate is
selected. This type of regulator may also have one or two uncalibrated
takeoff connections from the intermediate pressure chamber
with diameter index safety system (DISS) or similar connectors to
supply gas to other equipment, and the high pressure connection is
commonly a pin index safety system (PISS) yoke clamp. Similar
mechanisms can be used for flow rate control for aviation and
mountaineering regulators.
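The choked-flow figures quoted above can be checked against the critical pressure ratio formula for a compressible gas; the sketch below assumes a specific-heat ratio (gamma) of about 1.4 for oxygen near 20 °C.

```python
# Flow through an orifice chokes when the upstream/downstream absolute
# pressure ratio exceeds ((gamma + 1) / 2) ** (gamma / (gamma - 1)).
GAMMA = 1.4  # ratio of specific heats for O2 near room temperature

critical_ratio = ((GAMMA + 1) / 2) ** (GAMMA / (GAMMA - 1))
print(f"critical pressure ratio = {critical_ratio:.3f}")  # 1.893

# Minimum upstream absolute pressure when discharging to 1.013 bar:
print(f"minimum upstream pressure = {1.013 * critical_ratio:.3f} bar")  # 1.918
```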
Mining industry
As the pressure in water pipes builds rapidly with depth, underground
mining operations require a fairly complex water system with pressure
reducing valves. These devices must be installed at a certain vertical
interval, usually 600 feet (180 m). Without such valves, pipes could
burst and pressure would be too great for equipment operation.
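A quick hydrostatic calculation shows why such spacing is needed: over each 600 ft (about 180 m) vertical interval, a static column of fresh water adds roughly 18 bar of head. The sketch below assumes fresh water.

```python
# Hydrostatic head gained over one 600 ft (~180 m) vertical interval.
G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3, fresh water
DEPTH_M = 180.0

p_pa = RHO_WATER * G * DEPTH_M
print(f"added head ~ {p_pa / 1e5:.1f} bar ({p_pa * 0.000145038:.0f} psi)")
# ~17.7 bar (~256 psi) -- hence pressure-reducing valves at each interval
```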
Natural gas industry
Pressure regulators are used extensively within the natural gas
industry. Natural gas is compressed to high pressures in order to be
distributed throughout the country through large transmission
pipelines. The transmission pressure can be over 1,000 pounds per
square inch (69 bar) and must be reduced through various stages to a
usable pressure for industrial, commercial, and residential applications.
There are three main pressure reduction locations in this distribution
system. The first reduction is located at the city gate, where the
transmission pressure is dropped to a distribution pressure to feed
throughout the city. This is also the location where the odorless natural
gas is odorized with mercaptan. The distribution pressure is further
reduced at a district regulator station, located at various points in the
city, to below 60 psig. The final cut occurs at the end user’s
location. Generally, the end user reduction is taken to low pressures
ranging from 0.25 psig to 5 psig. Some industrial applications can
require a higher pressure.
Back-pressure regulators
Typical functions and applications include:
 Maintain upstream pressure control in analytical or process systems
 Protect sensitive equipment from overpressure damage
 Reduce the pressure difference over a component which is not tolerant
of large pressure differences.
 Gas sales lines
 Production vessels (e.g., Separators, heater treaters or free water
knockouts)
 Vent or flare lines
Hyperbaric chambers
Where the pressure drop on a built-in breathing system exhaust system
is too great, typically in saturation systems, a back-pressure regulator
may be used to reduce the exhaust pressure drop to a safer and more
manageable pressure.
Reclaim diving helmets
The depth at which most heliox breathing mixtures are used in surface-
supplied diving is generally at least 5 bar above surface atmospheric
pressure, and the exhaust gas from the diver must pass through
a reclaim valve, which is a back-pressure valve activated by the
increase in pressure in the diver's helmet above ambient pressure
caused by diver exhalation. The reclaim gas hose which carries the
exhaled gas back to the surface for recycling must not be at too great a
pressure difference from the ambient pressure at the diver. An
additional back-pressure regulator in this line allows finer setting of the
reclaim valve for lower work of breathing at variable depths.
Installation of the Pressure Recording Systems:
Pressure Recorders:
Setup: Install the recorder at the desired monitoring point, ensuring a
secure connection to the pressure source.
Chart Installation: Place a new chart on the device, aligning it with the
current time.
Operation: Activate the recorder to begin monitoring.
Reading: After the recording period, remove the chart. The pen’s trace
indicates pressure levels over time, with the chart’s time markings
helping to correlate pressure changes to specific intervals.
Indicators or Gauge System:
1. Select the Right Gauge
Before you pull out a wrench, first make sure you have the right type of
gauge for the application. The pressure gauge you choose must be the
correct one for the:
o Expected pressure range to be measured. The selected range
should be double the operating range.
o Process media compatibility.
o Process temperature
o Severe operating conditions (e.g., vibrations, pulsations, pressure
spikes).
Even if you install the gauge perfectly, you will face the same problems you had before the installation if the gauge isn’t the right one for the job.
2. Apply Force on Wrench Flats
Once you’ve chosen the correct gauge, pay attention to how you install
the gauge. Rather than turning the case by hand, use an open-end
wrench and apply force to the wrench flat. Applying the force through
the case could damage the case connection as well as the gauge
internals. Not applying sufficient torque could result in leaks.
3. Seal the Deal
Notice the type of threads on the gauge before you seal it. If the gauge
has parallel threads, seal it using sealing rings, washers, or WIKA
sealing rings (crush rings). If the gauge has tapered threads, additional
means of sealing, such as PTFE tape, are recommended. This is
standard practice for any pipe fitter because tapered threads do not
provide complete sealing on their own.
4. Use a Clamp Socket or Union Nut with Straight Thread
When tapered threads are used, the installer has the luxury of adjusting
the gauge even after sufficient torque has been applied. This allows for
convenient orientation of the gauge face. However, with straight
threads the face orientation is not adjustable once it bottoms out. For
that reason, we recommend using WIKA sealing rings (crush rings)
instead of flat washers. The WIKA sealing ring allows you to correctly
orient the gauge after the socket has been seated on the sealing ring.
You start by tightening the gauge by hand. As soon as you encounter a
resistance, apply an open-end wrench to the wrench flat and continue
turning the gauge. At this point you have approximately one turn left to
put the gauge into the desired position.
5. Leave Space for Blow-out
For personnel safety, some gauges come with a safety pattern design
consisting of a solid wall between the front of the gauge and the
Bourdon tube, and a blow-out back. In the event of a pressure build-up
inside the case or a catastrophic Bourdon tube rupture, all the energy
and release of media will be directed to the back of the gauge, thus
protecting the people reading the gauge. In order for the safety device to function properly, it is important to keep a minimum clearance of 1/2 inch behind the gauge. WIKA XSEL® process gauges come standard with integrated pegs to ensure this distance when mounting the gauge against a surface.
6. Vent the Gauge Case
Some gauges come with a small compensating valve on top of the case, and its purpose is often misunderstood. During shipment, liquid-filled gauges can go through temperature changes that create internal pressure build-up inside the case, which can push the gauge pointer off zero. When installing the gauge, open the compensation valve to allow this pressure to vent. After the gauge is mounted and in service, set the compensating valve from CLOSE to OPEN so that the case interior stays at ambient pressure.
A pressure gauge can do its job only if it’s installed properly. Whether
you’re an operator or a maintenance technician, use these tips for
proper gauge installation to make sure your gauges perform as they
should. Contact WIKA’s technical support team if you have questions
about properly installing gauges.
Pressure Drain Regulator
Installing a water pressure regulator yourself may seem intimidating, but it is relatively easy with the right tools, knowledge, and patience. The regulator keeps water flowing at a safe pressure and prevents damage to fixtures and pipes caused by excessive pressure.
Materials Needed:
 Water pressure regulator
 Pipe wrench or adjustable pliers
 Teflon tape
 Pipe cutter
 PEX or copper piping (if necessary)
 Pipe fittings (if necessary)
STEP 1. Turn Off the Main Water Supply
The first step in installing a water pressure-reducing valve is to turn off
the principal water supply. This can usually be found near the entrance
of a building and should be marked as such. To shut off the main water
supply, rotate the valve clockwise until it is fully closed.
If you are unsure if this has been done correctly, test it by turning on an
internal faucet; no water should come out if it has been closed
correctly. Once this is done, all other faucets should be turned off, and
any remaining in-use appliances should be disconnected from their
respective outlets before proceeding.
STEP 2. Remove the Old Regulator (if Applicable)
If an existing regulator is already installed at your property, then this
unit should be removed before installing the new one. To do so, begin
by detaching any connected pipes or fittings attached to either side of
the existing unit before unscrewing it from its mounting bracket
(usually located inside an adjacent wall).
Once removed, clear away any debris that may have accumulated
inside before proceeding with the installation of the new unit.
STEP 3. Measure and Cut Piping (if Necessary)
The next step is measuring where piping needs to be cut. This will
depend entirely on where you choose to mount your new unit, either
outside or inside, and what materials are used for piping.
For ease of use, it is recommended to use Teflon tape when making
connections between pipes so that they remain secure during
operation. Also, double-check measurements before cutting any piping
material.
Otherwise, you risk ending up with pieces that are too short or too long, which can delay installation or cause problems down the road.
STEP 4. Install the New Water Pressure Regulator
After all the necessary measurements have been made and any piping cut, the new water pressure regulator can be installed. Before mounting, attach any necessary fittings, such as washers and gaskets, and use a level and straightedge to mark the mounting holes for the new valve on the wall or flooring where it will be installed. Then attach the unit to its mounting bracket (on an adjacent wall) and connect any relevant pipes or fittings to either side of it using a wrench or pliers. Finally, seal all threaded fittings with Teflon tape or pipe compound before tightening down all nuts and bolts.
STEP 5. Connect the Piping (if Necessary)
If any existing pipes need connecting to your newly installed water
pressure regulator, you’ll need to prepare them first. Clean all mating surfaces with steel wool or sandpaper, then attach the pipes using standard soldered joints or compression fittings, depending on the type of metal used in construction.
Use channel locks or adjustable wrenches to securely tighten nuts,
bolts, or clamps. Ensure all connections are airtight before connecting
up any additional pipes, such as those leading from the hot and cold
taps.
STEP 6. Turn On the Water Supply
Once all connections have been made securely and everything appears
correct, you can slowly turn back on your main water supply, ensuring
no leaks occur along the connected pipes or valves.
If everything appears ok after running for some time, check again with
a pressure gauge placed at each tap/faucet point and adjust as required
until the optimal water flow has been achieved throughout your home’s
plumbing system.
Pressure Air Dryer
The explosion-proof air dryer is a piece of drying equipment used to handle flammable and explosive substances. Special attention must be paid to safety and environmental protection during the installation process. The following are the steps and precautions for the correct installation of an explosion-proof air dryer.
1. Equipment selection and location selection:
Before purchasing an explosion-proof air dryer, you must first select the
appropriate equipment model based on actual production needs. When
selecting equipment, factors such as material properties, output
requirements, and reliability should be considered. Then, select an
installation location for the drying equipment based on the plant
structure and ventilation conditions. Under normal circumstances,
installation of explosion-proof air dryers in areas where flammable and
explosive gases or liquids are stored should be avoided.
2. Install equipment basics:
Before installing the explosion-proof air dryer, it is necessary to ensure
that the equipment foundation is stable and reliable. Depending on the
weight and size of the equipment, adopt a suitable foundation
structure, such as a concrete foundation or a steel plate foundation, to
ensure that the equipment does not move or tilt during operation.
3. Install electrical equipment:
The operation of the explosion-proof air dryer is inseparable from the
electrical control system. During the installation process, electrical
circuits should be laid out in accordance with relevant standards and
specifications. All electrical circuits must meet explosion-proof
requirements, use explosion-proof electrical appliances and explosion-
proof cables, and the equipment must be reliably grounded.
4. Install the fan and duct system:
The explosion-proof air dryer brings air into the drying chamber through
a fan, and then discharges the humid air through the pipe. When
installing a fan, choose an explosion-proof model that meets relevant
requirements and install it in a suitable location to ensure smooth
operation of the ventilation system. At the same time, pay attention to
the tightness of the connection between the fan and the pipe to avoid
leakage or blockage.
5. Install the drive system:
The transmission system of explosion-proof air dryers usually includes
motors, reducers and transmission belts. During the installation
process, make sure that each component is installed correctly and
adjusted and calibrated correctly. The transmission belt should be inspected and replaced in good time to ensure effective power transmission and safe operation.
6. Connect the air source system:
The air source system of an explosion-proof air dryer usually includes
an air compressor and a dryer. Before connecting the air source, make
sure that the working pressure and output of the air compressor match
the requirements of the dryer. Also check the tightness of the air source
pipes and valves to ensure that the air source is supplied normally.
7. Install the control system:
The control system of an explosion-proof air dryer usually consists of a PLC and a human-machine interface. During installation, the control box should be mounted outside the drying room so that switches, relays, and other components susceptible to moisture and contamination are not directly exposed to the drying chamber. At the same time, the reliability and stability of the control system must be ensured.
8. Other notes:
During the installation process, you also need to pay attention to the
following matters:
- Strictly follow relevant standards and specifications, and operate
according to the installation drawings and instructions provided by the
equipment manufacturer;
- Ensure that the equipment is structurally complete and free from
damage or defects;
- After installation, inspect and tighten all fasteners;
- Pay attention to safety and wear personal protective equipment, such
as hard hats, goggles and protective gloves.
In summary, the correct installation of the explosion-proof air dryer is
crucial to the operation and safety of the equipment. During the
installation process, refer to the instructions of the equipment
manufacturer, operate in accordance with standards and specifications,
and strictly abide by relevant safety requirements to ensure the normal
operation and use of the equipment.
Instrumentation Levels: Height
In this method, the height of the instrument is first calculated for each setup of the instrument by adding the backsight (BS) reading to the elevation of the benchmark.
The height of the instrument is simply the reduced level of the top of the levelling instrument. The top level of the levelling instrument is marked with a scratch, as shown in the figure below. Before calculating the height of the instrument in surveying, we should know some important terms used for levelling.
1) BS (Backsight Reading)
The first staff reading taken after setting up the levelling instrument is known as the backsight reading. The staff is placed on a point whose benchmark elevation is given, e.g. 100 m or 150 m above sea level.
2) IS (Intermediate Sight)
All staff readings taken after the BS and before the FS are known as intermediate sight readings.
3) Fore Sight (FS)
The last staff reading taken before moving or changing the position of the tripod or level is known as the foresight reading.
To calculate the height of the instrument in surveying, first carry out the temporary and permanent adjustments of the level. In short, temporary adjustment of the level means levelling the tripod and performing horizontal and vertical centring, while permanent adjustment of the level means focusing on the staff with the telescope.
Now, let us go through step by step procedures to calculate the height
of the instrument.
1) Set up the levelling instrument with accurate temporary and permanent adjustments.
2) Place the staff on the ground point whose R.L. (reduced level) is known.
3) Take a staff reading with the levelling instrument.
4) Apply the formula below to calculate the height of the instrument.
5) Take further readings as IS and FS to complete the survey.
So, Height of instrument = R.L + Staff Reading
or, H.I = R.L + BS
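A short worked example with made-up staff readings shows the arithmetic: the HI is fixed by the backsight on the benchmark, and the reduced level of every other point is the HI minus the staff reading taken on it.

```python
# Height-of-instrument (HI) method with hypothetical readings (metres).
BENCHMARK_RL = 100.000   # reduced level of the benchmark (assumed)
BS = 1.250               # backsight reading on the benchmark

hi = BENCHMARK_RL + BS   # height of instrument = 101.250 m

staff_readings = {"IS1": 1.900, "IS2": 0.850, "FS": 2.100}
for point, reading in staff_readings.items():
    print(f"RL of {point} = {hi - reading:.3f} m")
# IS1 -> 99.350 m, IS2 -> 100.400 m, FS -> 99.150 m
```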
Instrument Levels: Weight
Weight-based level instruments sense process level in a vessel by
directly measuring the weight of the vessel. If the vessel’s empty
weight (tare weight) is known, process weight becomes a simple
calculation of total weight minus tare weight. Obviously, weight-based
level sensors can measure both liquid and solid materials, and they
have the benefit of providing inherently linear mass storage
measurement
(Note 1). Load cells (strain gauges bonded to a steel element of
precisely known modulus) are typically the primary sensing element of
choice for detecting vessel weight. As the vessel’s weight changes, the
load cells compress or relax on a microscopic scale, causing the strain
gauges inside to change resistance. These small changes in electrical
resistance become a direct indication of vessel weight.
The following photograph shows three bins used to store powdered
milk, each one supported by pillars equipped with load cells near their
bases:
Note 1: Regardless of the vessel’s shape or internal structure, the
measurement provided by a weight-sensing system is based on the true mass
of the stored material. Unlike height-based level measurement technologies
(float, ultrasonic, radar, etc.), no characterization will ever be necessary to
convert a measurement of height into a measurement of mass.
A close-up photograph shows one of the load cell units in detail, near the base
of a pillar:
When multiple load cells are used to measure the weight of a storage vessel,
the signals from all load cell units must be added together (“summed”) to
produce a signal representative of the vessel’s total weight. Simply measuring
the weight at one suspension point is insufficient (Note 2), because one can never be sure the vessel’s weight is distributed
equally amongst all the supports. If we happened to know, somehow, that the
vessel’s weight was in fact equally shared by all supports, it would be sufficient
to simply measure stress at one support to infer total vessel weight. In such an
installation, assuming three supports, the total vessel weight would be the
stress at any one support multiplied by three.
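In software terms, the summing and tare subtraction amount to the sketch below, with hypothetical readings; it also shows why the single-support shortcut fails when the load is shared unevenly.

```python
# Gross weight = sum of all load cell signals; stored mass = gross - tare.
load_cell_readings = [5210.0, 4880.0, 5035.0]  # kg, one per support (assumed)
TARE_WEIGHT = 3500.0                           # kg, empty-vessel weight

gross = sum(load_cell_readings)
net_material = gross - TARE_WEIGHT
print(f"gross = {gross:.0f} kg, stored material = {net_material:.0f} kg")
# Measuring only one support and multiplying by three would give
# 3 * 5210 = 15630 kg instead of the true gross of 15125 kg.
```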
This next photograph shows a smaller-scale load cell installation used to
measure the quantity of material fed into a beer-brewing process:
Weight-based measurements are often employed where the true mass of a
quantity must be ascertained, rather than the level. So long as the material’s
density is a known constant, the relationship between weight and level for a
vessel of constant cross-sectional area will be linear and predictable. Constant
density is not always the case, especially for solid materials, and so weight-
based inference of vessel level may be problematic.
In applications where batch mass is more important than height (level),
weight-based measurement is often the preferred method for portioning
batches. You will find weight-based portion measurements used frequently in
the food processing industries (e.g. consistently filling bags and boxes with
product), and also for custody transfer of certain materials (e.g. coal and metal
ore).
One very important caveat for weight-based level instruments is to isolate the
vessel from any external mechanical stresses generated by pipes or
machinery. The following illustration shows a typical installation for a weight-
based measurement system, where all pipes attaching to the vessel do so
through flexible couplings, and the weight of the pipes themselves is borne by
outside structures through pipe hangers:
Stress relief is very important because any forces acting upon the storage
vessel will be interpreted by the load cells as more or less material stored in
the vessel. The only way to ensure that the load cell’s measurement is a direct
indication of material held inside the vessel is to ensure that no other forces
act upon the vessel except the gravitational weight of the material.
A similar concern for weight-based batch measurement is vibration produced
by machinery surrounding (or on) the vessel. Vibration is nothing more than
oscillatory acceleration, and the acceleration of any mass produces a reaction
force (F = ma). Any vessel suspended by weight sensing elements such as load
cells will induce oscillating forces on those load cells if shaken by vibration.
This concern in particular makes it quite difficult to install and operate
agitators or other rotating machinery on a weighed vessel.
An interesting problem associated with load cell measurement of vessel weight
arises if there are ever electric currents traveling through the load cell(s). This
is not a normal state of affairs, but it can happen if maintenance workers
incorrectly attach arc welding equipment to the support structure of the
vessel, or if certain electrical equipment mounted on the vessel such as lights
or motors develop ground faults. The electronic amplifier circuits interpreting a
load cell’s resistance will detect voltage drops created by such currents,
interpreting them as changes in load cell resistance and therefore as changes
in material level. Sufficiently large currents may even cause permanent
damage to load cells, as is often the case when the currents in question are
generated by arc welding equipment.
A variation on this theme is the so-called hydraulic load cell which is a piston-
and-cylinder mechanism designed to translate vessel weight directly into
hydraulic (liquid) pressure. A normal pressure transmitter then measures the
pressure developed by the load cell and reports it as material weight stored in
the vessel. Hydraulic load cells completely bypass the electrical problems
associated with resistive load cells, but are more difficult to network for the
calculation of total weight (using multiple cells to measure the weight of a
large vessel).
Instrumentation Levels: Volume
Volume is the space occupied by a quantity of material and often the level is
used to calculate the volume. Volume is typically expressed in gallons, liters,
cubic centimeters, cubic feet, or barrels. Volume is the measurement most
commonly derived from level.
Volume is usually determined by first measuring the level in a tank and then
calculating the volume based upon the tank geometry.
Many level-measurement devices store the level/ volume relationship for
common tank geometries in their electronic components, which enables them
to calculate a direct volume output.
In other cases, the volume may be calculated in a Distributed Control System
(DCS) or a programmable logic controller (PLC) or determined from a look-up
table that relates level to volume.
Tanks with dished ends do not have a standard shape. Therefore, the volume
of these tanks cannot be determined strictly from geometry. Instead, strapping
tables are used to determine volume.
Strapping tables
Calculating volume from level and tank geometry provides a volume
measurement accurate enough for most users’ needs. However, in some
instances, the geometry of the tank may be irregular, which makes it nearly
impossible to model the relationship between level and volume
mathematically. In such cases, volume must be determined from the level
reading through the use of a strapping table.
A strapping table is a look-up table that relates level to volume for several
discrete points in a tank. Strapping tables are usually derived by adding a
known volume of product to a tank and then measuring the level of product
that corresponds to that volume (manual strapping). The volume and level
measurements are recorded in a strapping table. Then, when a volume
measurement is required, level is measured and looked up in the strapping
table to find the corresponding volume. Strapping tables can just be a few
points to accommodate a tank shape or they can be hundreds of points. Larger
numbers of points are used with larger tanks that tend to bulge when filled. If a
measured level falls between two points in a table, volume is determined by
interpolating between the two points. Typically, strapping tables have a higher
concentration of points in tank regions where the relationship between level
and volume is not linear.
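A minimal strapping-table lookup with linear interpolation between the two nearest points might look like the sketch below; the table values are invented for a small tank, purely for illustration.

```python
# Level-to-volume lookup via a strapping table with linear interpolation.
from bisect import bisect_left

# (level in metres, volume in litres), sorted by level
STRAPPING_TABLE = [(0.0, 0.0), (0.2, 150.0), (0.5, 900.0),
                   (1.0, 2400.0), (1.5, 3900.0)]

def volume_from_level(level_m: float) -> float:
    levels = [p[0] for p in STRAPPING_TABLE]
    i = bisect_left(levels, level_m)
    if i == 0:
        return STRAPPING_TABLE[0][1]
    if i == len(levels):
        return STRAPPING_TABLE[-1][1]
    (l0, v0), (l1, v1) = STRAPPING_TABLE[i - 1], STRAPPING_TABLE[i]
    return v0 + (v1 - v0) * (level_m - l0) / (l1 - l0)

print(volume_from_level(0.35))  # 525.0 L, interpolated between 0.2 m and 0.5 m
```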
Mass
Mass, the amount of matter an object contains, is often equated with weight.
Mass is typically expressed in terms of kilograms, grams, tons, or pounds.
Mass is unaffected by temperature. Thus, 60 lb (27.2 kg) of oil at 50 °F (10 °C)
is still 60 lb at 86 °F (30 °C). However, the overall volume of the oil may
change due to expansion.
If density is known, mass can be found from a level measurement by first
finding volume and then using the following equation:
Mass=Density x Volume
Some level-measurement devices measure mass directly (e.g. load cells).
Interface Measurement
An interface is the boundary between two immiscible (incapable of being
mixed) fluids with different densities (e.g. oil and water). An interface
measurement finds the boundary between two liquids stored in the same tank,
each with a different density. For example, when oil and water occupy the
same vessel, the oil floats on top of the water. The interface between the two
fluids is the upper level of the water and the lower level of the oil.
Interface is often used when a user has two fluids in a tank and wants to pour
off only the top fluid. The interface measurement indicates when to stop.
Interface measurement can also be used in a separator, where the interface is used to control the flow of the top and bottom fluids out of the vessel with minimum contamination.
Liquid versus gas pressure
The factors that influence the pressure of a liquid are different from the factors
that influence the pressure of a gas. When measuring pressure, it is important
to understand the pressure properties of liquids and gases. The hydrostatic
pressure exerted by a liquid is influenced by three factors:
 height of liquid in a column
 density of the liquid
 pressure on the surface of the liquid (vapor space)
The pressure at the bottom of a column of liquid increases as the height of the
liquid in the column increases. Pressure is affected by the height rather than
the volume of a liquid.
Unlike a liquid, a gas exerts equal pressure on all parts of the container in
which it is held. Two factors affect the pressure exerted by a gas:
 volume of the container in which the gas is held
 temperature of the gas
Note: An electronic/digital connection between two pressure transmitters in a dP level system helps eliminate problems with impulse lines, such as temperature effects, clogging, or freezing in the winter.
The relationship between pressure, temperature, and volume of a gas can be
determined by applying the ideal gas law.
Ideal gas law: PV = nRT
Gas pressure is affected by changes in temperature. If the volume of the
vessel holding a gas and the amount of gas are unchanged, the pressure
exerted by the gas on the vessel walls will change in proportion to changes in
the temperature of the gas. Simply stated, by measuring the pressure and temperature in the tank, you can determine the amount of gas it contains.
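A short worked example of that proportionality, holding n and V constant (values are illustrative):

```python
# At constant volume and amount of gas, P2 = P1 * (T2 / T1),
# with temperatures in kelvin.
P1_BAR = 5.00     # vessel pressure at T1
T1_K = 293.15     # 20 degC
T2_K = 323.15     # 50 degC

p2_bar = P1_BAR * (T2_K / T1_K)
print(f"P2 = {p2_bar:.2f} bar")  # ~5.51 bar after warming to 50 degC
```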
Measuring flow
A common use of a pressure measurement is to infer a fluid’s flow rate through
a pipe. As a fluid flows through a pipe restriction, the fluid pressure drops. The
pressure of the fluid flowing through a pipe is greater on the upstream side of
the restriction and lower on the downstream side.
If pressure is measured before and after the restriction in the pipe (e.g., a flow element such as an orifice plate, venturi tube, flow nozzle, wedge, or annubar), the square root of the pressure drop is proportional to the flow rate of the fluid through the pipe.
Note: Because hydrostatic pressure is directly proportional to the height of the liquid, differential pressure measurements can also be used to report levels.
Measuring level
The level of a liquid in a tank or vessel can be determined from a pressure
measurement by this equation:
height = pressure / liquid’s specific gravity
Closed tanks or vessels need a dP transmitter to account for the vapor-space pressure. Open tanks or vessels need a gauge pressure transmitter or a dP transmitter with the low-pressure side vented to the atmosphere.
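Worked in SI units, the equation above becomes h = P / (SG × ρ_water × g); the sketch below uses illustrative values.

```python
# Hydrostatic level from a pressure reading.
G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3

def level_m(pressure_pa: float, specific_gravity: float) -> float:
    return pressure_pa / (specific_gravity * RHO_WATER * G)

# 43 kPa of hydrostatic head over an oil of specific gravity 0.88:
print(f"level = {level_m(43_000, 0.88):.2f} m")  # ~4.98 m
```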
Measuring density
Pressure is equal to the height of the column of liquid being measured
multiplied by the specific gravity of the liquid. Therefore, if the height of the
column is a known constant, as in the case of the distance between two
pressure-measurement points on a vessel, the density can be inferred from the
pressure reading using the following equation:
specific gravity = pressure / height of liquid (level)
Note: Approximately half of all flow measurements are made by inferring the flow rate from a differential pressure measurement.
Specific gravity values can then be converted to density or mass-per-unit-of-
volume units such as grams per cubic centimeter. Density measurements are
often used in the brewing industry to determine stages of fermentation.
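Rearranged for two taps a known, fixed vertical distance apart, specific gravity = dP/height, as in this illustrative sketch (units and values are assumptions):

```python
def specific_gravity(dp_inh2o: float, tap_spacing_in: float) -> float:
    """Infer specific gravity from the dP between two submerged taps.

    dp_inh2o       -- differential pressure between the taps (inH2O)
    tap_spacing_in -- known, constant vertical distance between taps (inches)
    """
    return dp_inh2o / tap_spacing_in

# Example: taps 40 inches apart reading 42 inH2O -> SG = 1.05, i.e. a density
# of about 1.05 g/cm^3 (since water is 1.0 g/cm^3)
print(specific_gravity(42.0, 40.0))  # 1.05
```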
Even after decades of use in industrial applications, pressure measurement
technologies continue to advance. These advancements make traditional
measurements easier, and often allow measurements in areas where they
were not possible before.

DESIGN AND APPLICATION OF SIGHT GLASSES


The oil and gas industry is one of the most critical sectors in global energy
production. To ensure efficient and safe operations, various equipment and
technologies are employed. Among these essential components are sight
glasses, which play a vital role in monitoring processes and enhancing safety
within the industry. Sight glasses provide visual access to observe fluids,
gases, and their behaviors in equipment and pipelines. This section explores the
significance, types, and applications of sight glasses in the oil and gas industry.

A sight glass, also known as a sight window, viewport, or sight port window, is
a specialized type of transparent visual window for observing working fluids at
high temperatures and pressures. It is widely used in tanks, process vessels,
boilers, reactors, and many other pieces of industrial equipment. Sight glasses
are made from robust glass materials that can resist such extreme
environments, so one can use them to observe the process inside the
equipment safely. Resembling a glass disc held between two metal frames,
sight glasses are strong enough to resist breakage and other common failures.
Importance of Sight Glasses
In oil and gas facilities, the ability to visually inspect processes, fluids, and
equipment is invaluable. Sight glasses allow operators and maintenance
personnel to monitor
 fluid levels,
 flow rates,
 color changes,
 phase separations,
 production stages,
 fluid quality, and
 The presence of contaminants or bubbles.
These observations help identify potential issues, such as leaks, pressure
imbalances, or equipment malfunctions, without having to open the equipment.
This allows for timely interventions and prevents costly downtime or
hazardous situations. The figure below shows some typical sight glasses:
Typical Sight Glasses

Types of Sight Glasses


Transparent Sight Glasses:
These sight glasses are made from materials such as tempered borosilicate glass or
polycarbonate. They offer excellent visibility, high resistance to chemicals and
thermal shock, and are suitable for high-pressure applications. Transparent sight
glasses are commonly used in storage tanks, pipelines, and process vessels.
Reflex Sight Glasses:
Reflex sight glasses have a prism on one side that refracts light, resulting in a
contrast difference when viewed from different angles. The change in appearance
indicates the presence or absence of fluid within the sight glass. These glasses are
advantageous when dealing with transparent fluids, as they enhance visibility by
highlighting the fluid level. Reflex sight glasses are commonly used in low-pressure
applications and steam systems.
Magnetic Sight Glasses:
Magnetic sight glasses incorporate a magnetic strip or float inside a glass tube. As
the fluid level changes, the position of the magnetic float varies, providing a visual
indication of the level. These sight glasses are particularly useful when dealing with
opaque or hazardous fluids, as they eliminate the need for direct contact with the
medium. Magnetic sight glasses are widely used in oil separators, compressor
systems, and hydraulic reservoirs.
Depending on their construction, there are also two types of sight glasses
used in the oil and gas industries. They are:
1. Conventional glass disc window sight glass which consists of a glass
disc sandwiched between two gasketed metal rings suitable for low-
pressure and non-critical processes; and
2. A fused sight glass for high-performance applications where the glass
window is fused to the metal carrier ring during its construction.

Materials for Sight Glasses


Depending on the application environment requirements, various types of glass
materials are used for making sight glasses.
 For low-pressure-temperature, non-critical applications sight glass
windows are manufactured from standard soda lime glass. For a
temperature of up to 536°F, borosilicate glass (Pyrex) or acrylic plastics
are used for constructing sight glass windows.
 For equipment involving temperatures above 536°F, sight glass
windows are generally made using quartz or sapphire glass. These
materials are resistant to thermal shock and corrosion.
 For some sanitary processing applications, high-grade specialty plastics
such as acrylic (plexiglass or PMMA (Polymethyl methacrylate)) or
polycarbonates are used for making sight glasses.
The metallic frames are normally made from stainless steel. The materials used
for constructing sight glasses can thus be summarized as follows:
 Borosilicate Glass: Borosilicate glass, such as the popular brand
Pyrex®, is widely used in sight glasses. It offers excellent clarity, high
resistance to thermal shock, and good chemical compatibility with a
wide range of fluids. Borosilicate glass is suitable for both low and high-
pressure applications and is often used in transparent sight glasses.
 Tempered Glass: Tempered glass is heat-treated glass that is
stronger and more impact-resistant than standard glass. It undergoes a
controlled heating and rapid cooling process to increase its strength.
Tempered glass sight glasses are preferred in applications where there
is a higher risk of mechanical stress or impact, such as in industrial
environments.
 Polycarbonate: Polycarbonate is a durable, lightweight, and shatter-
resistant thermoplastic material. It offers excellent impact resistance
and is often used as a safer alternative to glass in applications where
the risk of breakage is high. Polycarbonate sight glasses are commonly
used in industries where safety is a priority, such as oil and gas,
chemical processing, and food processing.
 Acrylic: Acrylic, also known as plexiglass or PMMA (Polymethyl
methacrylate), is a transparent thermoplastic material. It offers good
optical clarity and impact resistance, and is lighter than glass. Acrylic
sight glasses are commonly used in lower-pressure and temperature
applications where chemical resistance is not a critical requirement.
 Stainless Steel: In some cases, sight glasses may have metal frames
or housings, especially when the operating conditions are severe or the
sight glass is exposed to corrosive environments. Stainless steel is
often used for such applications due to its high strength, corrosion
resistance, and durability. Stainless steel sight glasses are commonly
used in oil and gas equipment, chemical processing, and offshore
installations.
 Teflon (PTFE): Teflon, or polytetrafluoroethylene (PTFE), is a fluoropolymer
known for its excellent chemical resistance. It is often used as a gasket or
sealing material in sight glasses to provide a reliable and leak-free connection
between the glass and surrounding equipment for low-temperature
applications.
Selecting Sight Glasses
Selecting the right sight glass for a specific application requires careful
consideration of several factors. These factors include:
Fluid Compatibility:
The sight glass material must be compatible with the fluid being observed.
Some fluids may be corrosive, abrasive, or reactive with certain materials. It
is crucial to choose a sight glass material that can withstand the chemical
properties of the fluid without degradation or damage.
Pressure and Temperature:
Consider the operating pressure and temperature of the system where the
sight glass will be installed. The sight glass should be able to withstand
mechanical stress and thermal conditions (thermal shock) without
compromising its integrity or visibility. Ensure that the selected sight glass
has an appropriate pressure and temperature rating for the application.
Visibility and Clarity:
The visibility requirements of the application should be considered. Some
applications may require high clarity and transparency to observe fine details
or color changes in the fluid. In such cases, materials like borosilicate glass or
tempered glass are preferred. Assess the optical properties of the sight glass
material to ensure it provides the desired visibility.
Mechanical Strength and Impact Resistance:
Depending on the application, the sight glass may be subject to mechanical
stress or impact. Consider the robustness and strength of the selected
material to ensure it can withstand the anticipated forces without breaking or
cracking. Tempered glass or polycarbonate are often chosen for applications
requiring high-impact resistance.
Safety Considerations:
Safety is paramount in the oil and gas industry. Evaluate the risk of potential
accidents, such as glass breakage, and consider materials that minimize the
hazards associated with shattered glass. Polycarbonate or acrylic, which are
shatter-resistant materials, may be preferred in safety-critical applications.
Environmental Factors:
Assess the environmental conditions surrounding the sight glass installation.
Factors such as exposure to harsh chemicals, UV radiation, extreme weather
conditions, or abrasive particles may impact the longevity and performance
of the sight glass. Choose a material that can withstand these environmental
factors and ensure its durability.
Regulatory and Industry Standards:
Consider any applicable regulatory requirements or industry standards for
sight glass selection. Some industries, such as oil and gas, may have specific
guidelines or standards that dictate the use of certain materials or designs.
Ensure compliance with relevant regulations and standards to maintain
operational and safety standards.
Maintenance and Cleaning:
Evaluate the ease of maintenance and cleaning for the selected sight glass
material. Some materials may require specialized cleaning procedures or
have restrictions on cleaning agents. Consider the accessibility of the sight
glass for routine inspections and maintenance activities.
Specifying a Sight Glass
Specifying a sight glass involves providing detailed information and
requirements to ensure that the selected sight glass meets the specific needs
of the application. Here is the key information to specify a sight glass
effectively:
 Application Details: Clearly define the application where the sight glass will
be used. Understand the purpose of the sight glass, whether it is for
monitoring fluid levels, observing flow patterns, or detecting the presence of
contaminants. Identify the specific equipment or system where the sight glass
will be installed, such as storage tanks, pipelines, reactors, or separators.
 Operating Conditions: Provide information about the operating conditions
of the system. This includes factors such as pressure range, temperature
range, and the type of fluid or media being observed. Note any potential
variations or fluctuations in pressure or temperature that the sight glass may
encounter during operation.
 Sight Glass Type: Determine the most suitable type of sight glass based on
the application requirements and operating conditions. Consider factors such
as transparency needs, pressure rating, and the ability to detect specific fluid
properties (e.g., color changes, fluid level). Choose from options such as
transparent sight glasses, reflex sight glasses, or magnetic sight glasses.
 Material Requirements: Based on the fluid compatibility, pressure,
temperature, and visibility requirements, specify the appropriate sight glass
material. Consider factors such as chemical resistance, mechanical strength,
impact resistance, and safety considerations. Common materials include
borosilicate glass, tempered glass, polycarbonate, or acrylic.
 Specify Dimensions and Connections: Provide the required dimensions of
the sight glass, including the diameter or length of the glass tube or window.
Specify the connections or fittings needed to integrate the sight glass into the
equipment or system, such as threaded, flanged, or welded connections.
 Consider Additional Features: Determine if any additional features are
necessary for the sight glass. This may include protective coatings for
enhanced chemical resistance, special gaskets or seals, lighting
arrangements for improved visibility, or pressure relief devices. Specify any
specific requirements or preferences for these additional features.
 Compliance and Standards: Ensure that the specified sight glass complies
with applicable industry standards, regulations, and safety guidelines.
Consider any specific certifications or approvals required for the sight glass,
especially in safety-critical applications.
 Quantity and Delivery Timeframe: Specify the number of sight glasses
you require, as well as any specific delivery timeframe or schedule. This will
help the vendor provide accurate pricing and determine their ability to meet
your timeline.
 Any Other Relevant Information: Provide any additional relevant
information that might assist the vendor in understanding your requirements
accurately. This may include any specific preferences, previous experiences,
or challenges you have encountered with similar equipment.
Applications of Sight Glasses in the Oil and Gas Industry
Storage Tanks and Vessels:
Sight glasses allow operators to monitor liquid levels, flow patterns, and the
presence of sediments or contaminants in storage tanks and vessels. By
providing real-time visual information, sight glasses help optimize storage
capacity, prevent overflow or underfilling, and ensure the quality of stored
materials.
Reactors and Separators:
Sight glasses provide a direct view into reactors and separators, allowing
operators to observe chemical reactions, phase separations, and the
accumulation of impurities or byproducts. Monitoring these processes in real-
time facilitates adjustments to operating conditions and helps maintain the
efficiency and safety of the equipment.
Piping Systems:
Sight glasses installed in pipelines aid in the detection of leaks, blockages, or
changes in fluid behavior. By visually inspecting the flow, operators can
identify issues such as turbulence, cavitation, or the formation of hydrates,
enabling prompt maintenance and preventing system failures.
Offshore and Subsea Applications:
In offshore and subsea oil and gas operations, sight glasses are used to
monitor the condition of subsea equipment, pipelines, and risers. By visually
inspecting these critical components, operators can detect corrosion, fouling,
or the presence of hydrates or blockages, ensuring the integrity and reliability
of subsea infrastructure.

The industries that widely use sight glasses are:
 Oil & Gas
 Chemical & Petrochemical
 Food & Beverage
 Biofuels
 Pharmaceutical
 Utility Industry
 Biogas
 Biotech
 Wastewater Treatment
What is a Boiler Sight Glass?
A boiler sight glass, also known as a sight gauge or water level gauge, is a
transparent tubular device used to visually monitor the water level inside a
boiler or steam generator. It provides a direct indication of the water level,
allowing operators to ensure the safe and efficient operation of the boiler.
The boiler sight glass typically consists of a glass tube or window mounted on
the boiler’s water drum or steam drum. The glass tube is usually made of
tempered borosilicate glass, which is able to withstand high temperatures
and pressure. The sight glass is installed at a convenient location on the
boiler, allowing operators to easily observe the water level. The sight glass
operates on the principle of communicating vessels. As the water level inside
the boiler changes, the water level in the sight glass also rises or falls
accordingly. By observing the position of the water level in the sight glass,
operators can determine if the boiler has sufficient water for safe operation or
if it needs to be replenished.
What is a Tank Sight Glass?
A tank sight glass, also known as a liquid level gauge or tank level indicator,
is a transparent device used to visually monitor the liquid level inside a
storage tank or vessel. It provides a direct indication of the fluid level,
allowing operators to monitor inventory, prevent overfilling or underfilling,
and ensure efficient operation of the tank.

Tank sight glasses typically consist of a transparent glass or plastic tube or
window installed on the side or top of the tank. The material used for the
sight glass depends on the specific application and the properties of the fluid
being stored. The sight glass is connected to the tank through fittings or
flanges, allowing the liquid to reach the same level inside the sight glass as it
does in the tank.

Sight glasses play a crucial role in the oil and gas industry, enabling operators
and maintenance personnel to visually monitor processes, fluids, and
equipment. By providing real-time visibility, sight glasses contribute to
enhanced efficiency, improved safety, and timely interventions to prevent
costly downtime or hazardous situations. The availability of different types of
sight glasses allows for their widespread use in various applications, including
storage tanks, piping systems, reactors, and offshore operations.
Incorporating sight glasses as an integral part of oil and gas infrastructure is a
prudent investment that ensures the industry’s smooth functioning while
minimizing risks and maximizing operational performance.

Types of Level Measurement


Two methods are used to measure level:
1. Direct or Mechanical method, and
2. Indirect or Inferential methods.
1. Mechanical or Direct Method
Direct level measurement is simple, almost straightforward, and economical;
it uses a direct measurement of the distance (usually height) from the datum
line and is used primarily for local indication. It is not easily adapted to signal
transmission techniques for remote indication or control.
a. Dip Sticks and Lead Lines
Flexible lines fitted with end weights, called chains or lead lines, have been
used for centuries by seafaring men to gauge the depth of water under their
ships. Steel tapes having plumb-bob-like weights, stored conveniently on a
reel, are still used extensively for measuring level in fuel oil bunkers and
petroleum storage tanks (see figures below).
Crude as this method seems, it is accurate to about 0.1%. Although the
dipstick and lead line methods of level measurement are unrivalled in
accuracy, reliability, and dependability, there are drawbacks to this technique.
First, it requires an action to be performed, thus causing the operator to
interrupt his duty to carry out this measurement. There cannot be a
continuous representation of the process measurement.
Another limitation of this measuring principle is the inability to successfully
and conveniently measure level values in pressurised vessels. These
disadvantages limit the effectiveness of these means of visual level
measurement.
b. Sight Glass
Another simple method is called sight glass (or level glass). It is quite
straightforward in use; the level in the glass seeks the same position as the
level in the tank. It provides a continuous visual indication of liquid level in a
process vessel or a small tank and is more convenient than dipsticks, dip rods,
and manual gauging tapes.
Sight glass A is more suitable for gauging an open tank. A metal ball is
normally used in the tube to prevent the fluid from flowing out of the gauge.
Tubular glass of this sort is available in lengths up to 70 inches and for
pressures up to 600 psi. It is now seldom used.
The closed tank sight glass B, sometimes called a ‘reflex glass’, is used in
many pressurized and atmospheric processes. The greatest use is in
pressurised vessels such as boiler drums, evaporators, condensers, stills,
tanks, distillation columns, and other such applications. The length of reflex
glass gauges ranges from a few inches to eight feet, but like the tube-type
gauges, they can be ganged together to provide nearly any length of level
measurement.
The simplicity and reliability of gauge-type level measurement result in the
use of such devices for local indication. When level transmitters fail or must
be out of service for maintenance, or during times of power failure, this
method allows the process to be measured and controlled by manual means.
However, glass elements can get dirty and are susceptible to breakage thus
presenting a safety hazard especially when hot, corrosive or flammable
liquids are being handled.
c. Chain or Float Gauge
The visual means of level measurement previously discussed are rivaled in
simplicity and dependability by float type measurement devices. Many forms
of float type instruments are available, but each uses the principle of a
buoyant element that floats on the surface of the liquid and changes position
as the liquid level varies.
Many methods have been used to give an indication of level from a float
position with the most common being a float and cable arrangement. The
operational concept of a float and cable is shown in the following diagram;

The float is connected to a pulley by a chain or a flexible cable and the


rotating member of the pulley is in turn connected to an indicating device
with measurement graduation. As can be seen, as the float moves upward,
the counterweight keeps the cable tight and the indicator moves along the
circular scale.
2. Inferential or Indirect Methods
Indirect or inferred methods of level measurement depend on the material
having a physical property which can be measured and related to level. Many
physical and electrical properties have been used for this purpose and are
well suited to producing proportional output signals for remote transmission.
This method employs even the very latest technology in its measurement.
Included in these methods are;
A. Buoyancy: the force produced by a submerged body which is equal to the
weight of the fluid it displaces.
B. Hydrostatic head: the force or weight produced by the height of the
liquid.
C. Sonar or ultrasonic: materials to be measured reflect or affect in a
detectable manner high frequency sound signals generated at appropriate
locations near the measured material.
D. Microwave: similar to ultrasonic but uses microwave instead of ultrasonic
beam.
E. Conductance: at desired points of level detection, the material to be
measured conducts (or ceases to conduct) electricity between two fixed
probe locations or between a probe and vessel wall.
F. Capacitance: the material to be measured serves as a variable dielectric
between two fixed capacitor plates. In reality, there are two substances which
form the dielectric -the material whose measurement is desired and the vapor
space above it.
The total dielectric value change as the amount of one material increases
while the other decreases.
G. Radiation: the material measured absorbs radiated energy. As in the
capacitance method, vapor space above the measured material also has an
absorbing characteristic, but the difference in absorption between the two is
great enough that the measurement can be related quite accurately to
measured material.
H. Weight: the force due to weight can be related very closely to level when
its density is constant. Variable concentrations of components or temperature
variations present difficulties, however.
I. Resistance: Pressure of the measured material squeezes two narrowly
separated conductors together, reducing overall circuit resistance in an
amount proportional to level.
J. Micro-Impulse: a “time-of-flight” method in which electrical pulses are
launched down a probe and reflected back from the liquid surface; the travel
time is directly proportional to the distance to the surface, and hence to the
level of the liquid (see the sketch after this list).
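Methods C, D, and J above share the same time-of-flight arithmetic: the instrument measures a pulse’s round-trip travel time, converts it to a one-way distance, and subtracts that from the known sensor-to-bottom distance. A hedged sketch, with propagation speed and geometry as assumed example values:

```python
def level_from_tof(round_trip_s: float, speed_m_s: float,
                   sensor_height_m: float) -> float:
    """Level (m) inferred from a time-of-flight echo measurement.

    round_trip_s    -- measured round-trip time of the pulse (s)
    speed_m_s       -- propagation speed (~343 m/s for sound in air;
                       ~3e8 m/s for microwave/micro-impulse pulses)
    sensor_height_m -- distance from the sensor face to the vessel bottom
    """
    distance_to_surface = speed_m_s * round_trip_s / 2.0  # one-way distance
    return sensor_height_m - distance_to_surface

# Example: ultrasonic sensor mounted 6 m above the bottom, 20 ms echo in air
print(level_from_tof(0.020, 343.0, 6.0))  # ~2.57 m of liquid
```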

OPERATIONS AND USES OF BUOYANCY AND FLOAT-OPERATED GAUGES

Displacer level instruments exploit Archimedes’ Principle to detect liquid
level by continuously measuring the weight of an object (called the
displacer) immersed in the process liquid. As liquid level increases, the
displacer experiences a greater buoyant force, making it appear lighter to the
sensing instrument, which interprets the loss of weight as an increase in level
and transmits a proportional output signal. In practice, a displacer level
instrument usually takes the following form. Process piping in and out of the
vessel has been omitted for simplicity – only the vessel and its displacer level
instrument are shown:
The displacer itself is usually a sealed metal tube, weighted sufficiently so it
cannot float in the process liquid. It hangs within a pipe called a “cage”
connected to the process vessel through two block valves and nozzles. These
two pipe connections ensure the liquid level inside the cage matches the
liquid level inside the process vessel, much like a sightglass.

If liquid level inside the process vessel rises, the liquid level inside the cage
rises to match. This will submerge more of the displacer’s volume, causing a
buoyant force to be exerted upward on the displacer. Remember that the
displacer is too heavy to float, so it does not “bob” on the surface of the liquid
nor does it rise the same amount as the liquid’s level – rather, it hangs in
place inside the cage, becoming “lighter” as the buoyant force increases. The
weight-sensing mechanism detects this buoyant force when it perceives the
displacer becoming lighter, interpreting the decreased (apparent) weight as
an increase in liquid level. The displacer’s apparent weight reaches a
minimum when it is fully submerged, when the process liquid has reached the
100% point inside the cage.

It should be noted that static pressure inside the vessel will have negligible
effect on a displacer instrument’s accuracy. The only factor that matters is
the density of the process fluid, since buoyant force is directly proportional to
fluid density (F = γV ).

Two photos of a disassembled Level-Trol displacer instrument appear here,
showing how the displacer fits inside the cage pipe:

The cage pipe is coupled to the process vessel through two block valves,
allowing isolation from the process. A drain valve allows the cage to be
emptied of process liquid for instrument service and zero calibration.

Some displacer-type level sensors do not use a cage, but rather hang the
displacer element directly in the process vessel. These are called “cageless”
sensors. Cageless instruments are of course simpler than cage-style
instruments, but they cannot be serviced without de-pressurizing (and
perhaps even emptying) the process vessel in which they reside. They are
also susceptible to measurement errors and “noise” if the liquid inside the
vessel is agitated, either by high flow velocities in and out of the vessel, or by
the action of motor-turned impellers installed in the vessel to provide
thorough mixing of the process liquid(s).
Full-range calibration may be performed by flooding the cage with process
liquid (a wet calibration), or by suspending the displacer with a string and
precise scale (a dry calibration), pulling upward on the displacer at just the
right amount to simulate buoyancy at 100% liquid level:

Calculation of this buoyant force is a simple matter. According to Archimedes’
Principle, buoyant force is always equal to the weight of the fluid volume
displaced. In the case of a displacer-based level instrument at full range, this
usually means the entire volume of the displacer element is submerged in the
liquid. Simply calculate the volume of the displacer (if it is a cylinder, V = πr²l,
where r is the cylinder radius and l is the cylinder length) and multiply that
volume by the weight density (γ):

Fbuoyant = γV

Fbuoyant = γπr²l

For example, if the weight density of the process fluid is 57.3 pounds per
cubic foot and the displacer is a cylinder measuring 3 inches in diameter and
24 inches in length, the necessary force to simulate a condition of buoyancy
at full level may be calculated as follows:

γ = 57.3 lb/ft³ ÷ 1728 in³/ft³ = 0.03316 lb/in³
V = πr²l = π × (1.5 in)² × (24 in) = 169.6 in³
Fbuoyant = γV = (0.03316 lb/in³)(169.6 in³) ≈ 5.63 lb
Note how important it is to maintain consistency of units! The liquid density


was given in units of pounds per cubic foot and the displacer dimensions in
inches, which would have caused serious problems without a conversion
between feet and inches. In my example work, I opted to convert density into
units of pounds per cubic inch, but I could have just as easily converted the
displacer dimensions into feet to arrive at a displacer volume in units of cubic
feet.

In a “wet” calibration, the 5.63 pound buoyant force will be created by the
liquid itself, the technician ensuring there is enough liquid inside the cage to
simulate a 100% level condition. In a “dry” calibration, the buoyant force will
be simulated by tension applied upward on the displacer with a hand scale
and string, the technician pulling with an upward force of 5.63 pounds to
make the instrument “think” it is sensing 100% liquid level when in fact the
displacer is completely dry, hanging in air.
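The same dry-calibration force can be reproduced for any cylindrical displacer with a short calculation like the sketch below. The function name and structure are illustrative, not from the source; it simply encodes Fbuoyant = γV with the unit conversion described above.

```python
import math

def dry_cal_force_lb(density_lb_ft3: float, dia_in: float,
                     length_in: float) -> float:
    """Upward force (lb) simulating 100% submergence of a cylindrical displacer.

    Buoyant force = weight density x displaced volume (Archimedes' Principle).
    Density is converted from lb/ft^3 to lb/in^3 (1 ft^3 = 1728 in^3) so the
    units stay consistent with displacer dimensions given in inches.
    """
    gamma = density_lb_ft3 / 1728.0                      # lb/in^3
    volume = math.pi * (dia_in / 2.0) ** 2 * length_in   # in^3
    return gamma * volume

# Example from the text: 57.3 lb/ft^3 fluid, 3 in diameter x 24 in displacer
print(round(dry_cal_force_lb(57.3, 3.0, 24.0), 2))  # 5.63 lb
```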
Assignment
1. Describe in details, the operations and uses of differential
pressure transmitter system for measuring level: Open tank,
closed tank (dry leg), closed tank (wet leg) and closed tank
(purged dip-pipe system).
2. Explain in details, the operation and use of electrical level
measuring devices
