Guide to Instruments and Methods
of Observation

Volume V – Quality Assurance and Management
of Observing Systems

2023 edition

WMO-No. 8
EDITORIAL NOTE
METEOTERM, the WMO terminology database, may be consulted at https://wmo.int/wmo-community/
meteoterm.
Readers who copy hyperlinks by selecting them in the text should be aware that additional
spaces may appear immediately following http://, https://, ftp://, mailto:, and after slashes (/),
dashes (-), periods (.) and unbroken sequences of characters (letters and numbers). These spaces
should be removed from the pasted URL. The correct URL is displayed when hovering over the
link or when clicking on the link and then copying it from the browser.
The right of publication in print, electronic and any other form and in any language is reserved by
WMO. Short extracts from WMO publications may be reproduced without authorization, provided
that the complete source is clearly indicated. Editorial correspondence and requests to publish,
reproduce or translate this publication in part or in whole should be addressed to:
ISBN 978-92-63-10008-5
NOTE
The designations employed in WMO publications and the presentation of material in this publication do
not imply the expression of any opinion whatsoever on the part of WMO concerning the legal status of any
country, territory, city or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries.
The mention of specific companies or products does not imply that they are endorsed or recommended by
WMO in preference to others of a similar nature which are not mentioned or advertised.
CONTENTS
Page
FOREWORD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Annex 5.C. Competency framework for personnel performing instrument calibrations . . . . . 110
Annex 5.D. Competency framework for personnel managing observing programmes
and networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
References and further reading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
FOREWORD
WMO Guides describe the practices and procedures that Members are invited to follow or
implement in establishing and conducting their arrangements for compliance with the WMO
Technical Regulations.
One longstanding publication in this series is the Guide to Instruments and Methods of
Observation (WMO-No. 8), which was first published in 1950. The Guide is the authoritative
reference for all matters related to instrumentation and methods of observation in the context of
the WMO Integrated Global Observing System (WIGOS). Uniform, traceable and high-quality
observational data represent an essential input for most WMO applications, such as climate
monitoring, numerical weather prediction, nowcasting and severe weather forecasting, all of
which facilitate the improvement of the well-being of societies around the world.
The main purpose of the Guide is to provide guidance on the most effective practices
and procedures for undertaking meteorological, hydrological and related environmental
measurements and observations in order to meet specific requirements for different application
areas. It also provides information on the capabilities of instruments and systems that are
regularly used to perform such observations. The theoretical basis of the techniques and
observational methods is outlined in the text and supported by references and further reading
for additional background information and details.
This 2023 edition of Volume V was approved by the WMO Executive Council at its seventy-sixth
session (EC‑76). In comparison to the 2018 edition of Volume V, this edition includes
updates to Chapter 5 – Training of instrument specialists.
On behalf of WMO, I would like to express my sincere gratitude to the Standing Committee
on Measurements, Instrumentation and Traceability of the Commission for Observation,
Infrastructure and Information Systems, and in particular, to its Expert Team on Transitioning
to Modern Measurement and Editorial Board, whose tremendous efforts have enabled the
publication of this new edition.
CHAPTER 1. QUALITY MANAGEMENT

1.1 GENERAL
This chapter is general and covers operational meteorological observing systems of any size or
nature. Although the guidance it gives on quality management is expressed in terms that apply
to large networks of observing stations, it should be read to apply even to a single station.
Quality management
Quality management provides the principles and the methodological framework for operations,
and coordinates activities to manage and control an organization with regard to quality. Quality
assurance and quality control are parts of any successful quality management system.
Quality assurance focuses on providing confidence that quality requirements will be fulfilled
and includes all the planned and systematic activities implemented in a quality management
system so that quality requirements for a product or service will be fulfilled. Quality control is
associated with those components used to ensure that the quality requirements are fulfilled and
includes all the operational techniques and activities used to fulfil quality requirements. This
chapter concerns quality management associated with quality control and quality assurance
and the formal accreditation of the laboratory activities, especially from the point of view of
meteorological observations of weather and atmospheric variables.
The International Organization for Standardization (ISO) 9000 family of standards is discussed
to assist understanding of the steps involved when introducing a quality management
system in a National Meteorological and Hydrological Service (NMHS); this set of standards
contains the minimum processes that must be introduced in a quality management system
to fulfil the requirements of the ISO 9001 standard. The total quality management
concept according to the ISO 9004 guidelines is then discussed, highlighting the views of
users and interested parties. The ISO/International Electrotechnical Commission (IEC) 17025
standard is introduced. The benefits to NMHSs and the Regional Instrument Centres (RICs)
from accreditation through ISO/IEC 17025 are outlined along with a requirement for an
accreditation process.
The ISO/IEC 20000 standard for information technology (IT) service management is introduced
into the discussion, given that every observing system incorporates IT components.
Data are of good quality when they satisfy stated and implied needs. Elsewhere in the present
Guide explicit or implied statements are given of required accuracy, uncertainty, resolution
and representativeness, mainly for the synoptic applications of meteorological data, but
similar requirements can be stated for other applications. It must be supposed that minimum
total cost is also an implied or explicit requirement for any application. The purpose of quality
management is to ensure that data meet requirements (for uncertainty, resolution, continuity,
homogeneity, representativeness, timeliness, format, and so on) for the intended application, at
a minimum practicable cost. All measured data are imperfect, but, if their quality is known and
demonstrable, they can be used appropriately.
The provision of good quality meteorological data is not a simple matter and is impossible
without a quality management system. The best quality management systems operate
continuously at all points in the whole observing system, from network planning and training,
through installation and station operations to data transmission and archiving, and they include
feedback and follow‑up provisions, on timescales from near‑real time to annual reviews, across
the end‑to‑end process. The amount of resources required for an effective quality management
system is a proportion of the cost of operating an observing system or network and is typically a
few per cent of the overall cost. Without this expenditure, the data must be regarded as being of
unknown quality, and their usefulness is diminished.
2 GUIDE TO INSTRUMENTS AND METHODS OF OBSERVATION - VOLUME V
An effective quality management system is one that manages the linkages between preparation
for data collection, data assurance and distribution to users to ensure that the user receives
the required quantity. For many meteorological quantities, there are a number of these
preparation‑collection‑assurance cycles between the field and the ultimate distribution to the
user. It is essential that all these cycles are identified and the potential for divergence from the
required quantity minimized. Many of these cycles will be so closely linked that they may be
perceived as one cycle. Most problems occur when there are a number of cycles and they are
treated as independent of one another.
Once a datum from a measurement process is obtained, it remains the datum of the
measurement process. Other subsequent processes may verify its worth as the quantity required,
use the datum in an adjustment process to create the quality required, or reject the datum.
However, none of these subsequent processes changes the datum from the measurement
process. Quality control is the process by which an effort is made to ensure that the processes
leading up to the datum being distributed are correct, and to minimize the potential for rejection
or adjustment of the resultant datum.
Quality assurance includes explicit control of the factors that directly affect the data collected
and processed before distribution to users. For observations or measurements, this includes
equipment, exposure, measurement procedures, maintenance, inspection, calibration,
algorithm development, redundancy of measurements, applied research and training. In a
data transmission sense, quality control is the process established to ensure that for data that is
subsequently transmitted or forwarded to a user database, protocols are set up to ensure that
only acceptable data are collected by the user.
Quality control is the best‑known component of quality management systems, and it is the
irreducible minimum of any system. It consists of all the processes that are put in place to
generate confidence and ensure that the data produced will have the required quality. It also
includes the examination of data at stations and at data centres to verify that the data are
consistent with the quality management system goals, and to detect errors so that the data
may be either flagged as unreliable, corrected or, in the case of gross errors, deleted. A quality
management system should include procedures for feeding back into the measurement and
quality control process to prevent the errors from recurring. Quality assurance can be applied in
real time, post measurement, and can feed into the quality control process for the next stage of
a quality system, but in general it tends to operate in non‑real time.
Real‑time quality control is usually performed at the station and at meteorological analysis
centres. Delayed quality assurance may be performed at analysis centres for the compilation of a
refined database, and at climate centres or databanks for archiving. In all cases, the results should
be returned to the observation managers for follow‑up.
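The flagging, correction and deletion logic described above can be sketched in a few lines of code. This is an illustrative example only: the variable, the limit values and the category names are assumptions for a hypothetical temperature sensor and are not values prescribed by the present Guide.

```python
# Minimal sketch of a real-time quality-control step: each observation
# is checked against plausibility limits and either accepted, flagged
# as unreliable, or rejected as a gross error. The limits below are
# invented for a temperature sensor reporting in degrees Celsius.

GROSS_LIMITS = (-90.0, 60.0)      # outside: gross error, delete
SUSPECT_LIMITS = (-40.0, 45.0)    # outside: keep but flag as unreliable

def quality_control(value):
    """Classify one temperature observation (degrees Celsius)."""
    if not (GROSS_LIMITS[0] <= value <= GROSS_LIMITS[1]):
        return "rejected"   # gross error: excluded from distribution
    if not (SUSPECT_LIMITS[0] <= value <= SUSPECT_LIMITS[1]):
        return "flagged"    # distributed, but marked unreliable
    return "accepted"

observations = [12.3, -55.0, 150.0, 21.7]
results = {v: quality_control(v) for v in observations}
```

In an operational system the station-level limits would be complemented by temporal and spatial consistency checks at the analysis centre, with results fed back to the observation managers.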
1.2 THE ISO 9000 FAMILY, ISO/IEC 17025, ISO/IEC 20000 AND
THE WMO QUALITY MANAGEMENT FRAMEWORK
This section explains the related ISO standards and how they interconnect. Conformity
with ISO quality standards can be demonstrated through certification or accreditation,
which usually requires external auditing of the implemented quality management system.
Certification implies that the framework and procedures used in the organization are in place
and used as stated. Accreditation implies that the framework and procedures used in the
organization are in place, used as stated and technically able to achieve the required result. The
assessment of technical competence is a mandatory requirement of accreditation, but not of
certification. ISO 9001 is a standard against which an organization can achieve certification,
while accreditation against ISO/IEC 17025 is commonly required for laboratories and routine
observations.
The ISO 9000 standard has been developed to assist organizations of all types and sizes to
implement and operate quality management systems. The ISO 9000 standard describes the
fundamentals of quality management systems and gives definitions of the related terms (for
example, requirement, customer satisfaction). The main concept is illustrated in Figure 1.1. The
ISO 9001 standard specifies the requirements for a quality management system that can be
certified in accordance with this standard. The ISO 9004 standard gives guidelines for continual
improvement of the quality management system to achieve a total quality management system.
The ISO 19011 standard provides the guidance on auditing the quality management system.
All these standards are described in more detail in the related documents of the WMO Quality
Management Framework.
The following eight quality management principles are the implicit basis for the successful
leadership of NMHSs of all sizes and for continual performance improvement:

(a) Customer focus;

(b) Leadership;

(c) Involvement of people;

(d) Process approach;

(e) System approach to management;

(f) Continual improvement;

(g) Factual approach to decision making;

(h) Mutually beneficial supplier relationships.
Figure 1.1. The main concept of the ISO 9000 standards and the dependencies
(quality management systems: ISO 9001 certification, requirements, ability to fulfil
customer requirements; excellence models: EFQM, Malcolm Baldrige)
All these principles must be documented and put into practice to meet the requirements of the
ISO 9000 and 9001 standards and achieve certification. The main topic of these standards is the
process approach, which can simply be described as activities that use resources to transform
inputs into outputs.
The process‑based quality management system is simply modelled in Figure 1.2. The basic
idea is a mechanism for obtaining continual improvement of the system and customer
satisfaction: process indices (for example, computing time of global numerical weather
prediction (NWP) models, customer satisfaction, reaction time, and so forth) are measured, the
results are assessed, and management decisions are made to improve resource management
and thereby obtain better products.
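The "check" step of such a cycle can be illustrated with a simple process index. The sketch below is an assumption-laden example, not an operational metric: the deadline, the target value and the delay figures are all invented for illustration.

```python
# Hedged sketch of measuring a process index within a PDCA cycle:
# the "check" step computes the fraction of reports delivered within
# a deadline, and the "act" step compares it against a management
# target. The deadline (15 min) and target (95%) are assumptions.

DEADLINE_MIN = 15        # assumed delivery deadline in minutes
TARGET = 0.95            # assumed management target

def timeliness_index(delays_min):
    """Fraction of reports delivered within the deadline."""
    on_time = sum(1 for d in delays_min if d <= DEADLINE_MIN)
    return on_time / len(delays_min)

delays = [3, 8, 14, 22, 5, 17, 9, 11]    # minutes after observation time
index = timeliness_index(delays)
needs_action = index < TARGET            # would trigger the "act" step
```

In practice such indices would be computed continuously from message logs, and a shortfall against the target would feed a management decision on resources.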
The basic requirements for a quality management system are given by this standard, including
processes for improvement and complaint management and carrying out management reviews.
These processes are normally incorporated in the quality manual. The ISO 9001 standard focuses
on management responsibility rather than technical activities.
To achieve certification in ISO 9001, six processes must be defined and documented by
the organization (NMHS), as follows:

(a) Control of documents;

(b) Control of records;

(c) Internal audit;

(d) Control of non‑conforming products;

(e) Corrective action;

(f) Preventive action.

Figure 1.2. The PDCA control circuit (also named the Deming circuit): plan (P), do (D),
check (C), act (A); management of resources; product realization from requirements to
satisfaction; measurement, analysis and improvement via process indices;
continual improvement
CHAPTER 1. QUALITY MANAGEMENT 5
Furthermore, there must be a quality manual which states the policy (for example, the goal is
to achieve regional leadership in weather forecasting) and the objectives of the organization
(for example, improved weather forecasting: reduce false warning probability) and describes
the process frameworks and their interaction. There must be statements for the following:
(a) Management;
Exclusions can be made, for example, for development (if there are no development activities in
the organization).
The documentation pyramid of the quality management system is shown in Figure 1.3. The
process descriptions indicate the real activities in the organization, such as the data‑acquisition
process in the weather and climate observational networks. They provide information on the
different process steps and the organizational units carrying out the steps, for cooperation
and information sharing purposes. The documentation must differentiate between periodic
and non‑periodic processes. Examples of periodic processes are data acquisition or forecast
dissemination. Examples of non‑periodic processes include the installation of measurement
equipment which starts with a user or component requirement (for example, the order to install
a measurement network).
Lastly, the instructions in ISO 9001 give detailed information on the process steps to be
referenced in the process description (for example, starting instruction of an automatic weather
station (AWS)). Forms and checklists are helpful tools to reduce the possibility that required tasks
will be forgotten.
Figure 1.3. The documentation pyramid of the quality management system (from top:
quality manual; process descriptions; instructions, forms and checklists)

The guidelines for developing the introduced quality management system to achieve business
excellence are formulated in ISO 9004. The main aspect is the change from the customer position
to the position of interested parties. Different excellence models can be developed from the
ISO 9004 guidelines, for example, the Excellence Model of the European Foundation for Quality
Management (EFQM)1 or the Malcolm Baldrige National Quality Award.2 Both excellence models
are well established and widely respected internationally.
The EFQM Excellence Model contains the following nine criteria, which are assessed by an expert
team of assessors:

(a) Leadership;

(b) People;

(c) Policy and strategy;

(d) Partnerships and resources;

(e) Processes;

(f) People results;

(g) Customer results;

(h) Society results;

(i) Key performance results.
The Malcolm Baldrige model contains seven criteria similar to those of the EFQM Excellence
Model, as follows:

(a) Leadership;

(b) Strategic planning;

(c) Customer focus;

(d) Measurement, analysis and knowledge management;

(e) Workforce focus;

(f) Operations focus;

(g) Results.
There is no certification process for this standard, but external assessment provides the
opportunity to draw comparisons with other organizations according to the excellence model
(see also Figure 1.1).
1.2.4 ISO 19011: Guidelines for auditing management systems

This standard is a guide for auditing management systems and does not have any regulatory
character. The following detailed activities are described for auditing the organization:
(a) Principles of auditing (ethical conduct, fair presentation, due professional care,
independence, evidence‑based approach);
1 See the EFQM website at http://www.efqm.org.

2 See the NIST website at http://www.nist.gov/baldrige/.
(b) Managing an audit programme;

(c) Audit activities (initiating the audit, preparing and conducting on‑site audit activities,
preparing the audit report);
(d) Training and education of the auditors (competence, knowledge, soft skills).
The manner in which audits are conducted depends on the objectives and scope of the audit
which are set by the management or the audit client. The primary task of the first audit is to check
the conformity of the quality management system with the ISO 9001 requirements. Further
audits give priority to the interaction and interfaces of the processes.
The audit criteria are the documentation of the quality management system, the process
descriptions, the quality manual and the unique individual regulations.
The audit planning published by the organization should specify the relevant departments of the
organization, the audit criteria and the audit objectives, place, date and time to ensure a clear
assignment of the audits.
1.2.5 ISO/IEC 17025: General requirements for the competence of testing and
calibration laboratories
This set of requirements is applicable to facilities, including laboratories and testing sites, that
wish to have external accreditation of their competence in terms of their measurement and
testing processes.
The ISO/IEC 17025 standard aligns its management requirements with those of ISO 9001. This
standard is divided into two main parts: management requirements and technical requirements.
Hence, the quality management system must follow the requirements of the ISO 9001 standard:
processes must be described, a management handbook must connect the processes with the
goals and policy statements, and these aspects must be audited regularly.
All laboratory processes must be approved, verified and validated in a suitable manner to meet
the requirements. Furthermore, the roles of the quality management representative (quality
manager) and the head of the laboratory must be determined.
1.2.6 ISO/IEC 20000: Information technology – Service management

National Meteorological and Hydrological Services make use of IT equipment to obtain data from
the measuring networks to use in global or local NWP models and to provide forecasters with the
outputs of models. The recommendations of this standard are helpful for the implementation of
reliable IT services. The new ISO/IEC 20000 standard summarizes the old British standard
BS‑15000 and the IT Infrastructure Library (ITIL) recommendations. The division of requirements
follows the ITIL structure.
The ITIL elements are divided into service delivery and service support, with the
following processes:

Service delivery: service level management, capacity management, availability management,
IT service continuity management and financial management for IT services.

Service support: incident management, problem management, change management, release
management and configuration management.
Special attention has been placed on the change‑management process, which can contain
release and configuration management. Incident and problem management is normally covered
by the implementation of a user help desk.
The WMO Quality Management Framework gives the basic recommendations that were based
on the experiences of NMHSs. The necessary conditions for successful certification against
ISO 9001 are explained in WMO (2005a, 2005b).
The Quality Management Framework is the guide for NMHSs, especially for NMHSs with little
experience in a formal quality management system. The introduction of a quality management
system is described only briefly in the following section, noting that WMO cannot carry out any
certification against ISO 9001.
Senior‑level management defines a quality policy and the quality objectives (including a quality
management commitment), and staff have to be trained in sufficient quality management topics
to understand the basis for the quality management process (see 1.2.2). Most importantly, a
project team should be established to manage the transition to a formal quality management
system including definition and analysis of the processes used by the organization.
To assist the project team, brief instructions can be given to the staff involved in the process
definition, and these would normally include the following:
Given that the documentation specifies what the organization does, it is essential that the main
processes reflect the functions of the organization of the NMHS. These can be a part of the
named processes (see Figure 1.4), for example:
Figure 1.4. Process landscape of an NMHS (example: Deutscher Wetterdienst; WMO, 2005a):
steering and management processes (management of staff, finances and procurement;
organizational development; internal communicating and reporting; system control (audits,
management reviews); improvement and complaint management) surrounding core processes
such as installation, operation and development of technical systems; data generation and
data management; consulting services; and atmospheric watch
(e) Research and development (global modelling, limited area models, instrumentation);
Even though these processes will be adapted to the individual needs of NMHSs and divided
into subprocesses, there should normally be regulations for remedying incidents (for example,
system failures, staff accidents).
The processes must be introduced into the organization with clear quality objectives, and all staff
must be trained in understanding the processes, including the use of procedures and checklists
and the measurement of process indicators.
Before applying for certification, the quality management system must be reviewed by carrying
out internal audits in the departments and divisions of the organization, to check conformity
of the quality management system as stated and as enacted. These documented reviews
can be performed on products by specialized and trained auditors. The requirements and
recommendations for these reviews are given in ISO 19011 (see 1.2.4).
The management review of the quality management system will include the following:
Accreditation requires additional processes and documentation and, most importantly, evidence
that laboratory staff have been trained and have mastered the processes and methods to be
accredited.
(d) Work instructions for all partial steps in the processes and methods;
Since procedures and methods are likely to change more frequently than the management
aspects of the accreditation, the methods are usually not included in the management
manual. However, there is specific reference to the procedures and methods used in the
management manual.
As it is unlikely that all aspects of the accreditation will be covered once the quality management
system is introduced, it is recommended that a pre‑audit be conducted and coordinated with the
certifying agency. In these pre‑audits, the certifying agency would normally examine the following:

(a) The documentation;

(b) The facilities included in the scope of the accreditation (for example,
laboratories, special field sites).
(h) Proof documents (for example, that staff training has occurred and that quantities
are traceable);
(i) Records (for example, correspondence with the customer, generated calibration
certificates).
The external expert team could request additional documents, as all aspects of the ISO/IEC 17025
standard are checked in more detail than in a certification under ISO 9001.
Besides the inspection of the measurement methods and associated equipment, the assessment
of the facilities in the scope of the accreditation will include the following:
(b) Assessment of the infrastructure that supports the methods (for example, buildings, access).
The following are also checked during the assessment to ensure that they meet the objectives
required by management for accreditation:
In addition, the assessment should verify that the laboratory has established proof of
the following:
An accreditation with suitable scope also provides commercial opportunities for the calibration,
verification and assessment of measurement devices.
For organizations that do not have a quality management system in place, the benefits of
accreditation are significant. First, it documents the organization’s system, and, through that,
a process of analysis can be used to make the organization more efficient and effective. For
example, one component of accreditation under ISO/IEC 17025 requires uncertainty analyses for
every calibration and verification test; such quantitative analyses provide information on where
the most benefit can be achieved for the least resources.
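The uncertainty analysis mentioned above typically combines independent standard uncertainty components in quadrature. The sketch below illustrates this general approach; the component names and values are invented for a hypothetical temperature calibration and do not come from the present Guide.

```python
# Minimal sketch of the kind of uncertainty budget ISO/IEC 17025
# accreditation expects for a calibration: independent standard
# uncertainty components are combined as a root sum of squares and
# expanded with a coverage factor k = 2 (approximately 95% coverage).
# Component values are illustrative assumptions, in degrees Celsius.

import math

components = {
    "reference standard": 0.02,
    "resolution of unit under test": 0.03,
    "bath stability": 0.05,
}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined standard uncertainty
U = 2.0 * u_c                                            # expanded uncertainty, k = 2
```

Comparing the squared components also shows which term dominates the budget, and hence where improvement effort would yield the most benefit for the least resources.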
Accreditation or certification under any recognized quality framework requires registration and
periodic audits by external experts and the certifying agency. These represent additional costs
for the organization and are dependent on the scope of the accreditation and certification.
Seeking accreditation before an effective quality management system is in place will lead to an
increased use of resources and result in existing resources being diverted to establish a quality
management system; there will also be additional periodic audit costs.
Several well‑known tools exist to assist in the processes of a quality management system and its
continual improvement. Three examples of these tools are described below as an introduction:
the Balanced Scorecard, Failure Mode and Effects Analysis, and Six Sigma.
The Balanced Scorecard (Kaplan and Norton, 1996) has at a minimum four points of focus:
finances, the customer, processes and employees. Often the general public is added given that
public interests must always be taken into account.
Each organization and organization element provides key performance indicators for each of the
focus areas, which in turn link to the organization’s mission (or purpose, vision or goals) and the
strategy (or working mission and vision).
Failure Mode and Effects Analysis is a method for the examination of possible failure causes and
faults and the probability of their occurrence. The method can be used for analysing production
processes and product specifications. The aim of the optimization process is to reduce the risk
priority number.
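The risk priority number is conventionally the product of severity, occurrence and detection ratings. The sketch below shows this calculation; the failure modes and ratings are invented examples for a hypothetical observing network, not an assessment from this Guide.

```python
# Sketch of a Failure Mode and Effects Analysis table: the risk
# priority number (RPN) is the product of severity, occurrence and
# detection ratings (each commonly scored from 1 to 10). All entries
# below are illustrative assumptions.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("sensor drift undetected between calibrations", 7, 4, 6),
    ("data logger power failure", 8, 2, 2),
    ("transmission line drops messages", 5, 5, 3),
]

def rpn(severity, occurrence, detection):
    """Risk priority number of one failure mode."""
    return severity * occurrence * detection

# Optimization effort targets the highest RPN first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```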
The Six Sigma method was developed in the communications industry and uses statistical
process controls to improve production. The objective of this method is to reduce process failure
below a specific value.
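The core Six Sigma calculation expresses the observed failure rate as defects per million opportunities (DPMO) and converts it, by convention, to a sigma level. The counts below are invented for illustration.

```python
# Sketch of the basic Six Sigma metric: the defect rate is expressed
# as defects per million opportunities (DPMO) and converted to a
# sigma level via the normal quantile plus the customary 1.5-sigma
# shift. Six Sigma quality corresponds to 3.4 DPMO. The defect and
# opportunity counts are illustrative assumptions.

from statistics import NormalDist

defects = 120
opportunities = 250_000

dpmo = defects / opportunities * 1_000_000
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
```

For a quality management system, the "opportunities" might be individual observation messages and the "defects" those failing quality control, giving a single figure that can be tracked over time.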
The life history of instruments in field service involves different phases, such as planning
according to user requirements, selection and installation of equipment, operation, calibration,
maintenance and training activities. To obtain data of adequate or prescribed quality,
appropriate actions must be taken at each of these phases. Factors affecting data quality are
summarized in this section, and reference is made to more comprehensive information available
in other chapters of the present Guide and in other WMO Manuals and Guides.
User requirements. The quality of a measuring system can be assessed by comparing user
requirements with the ability of the systems to fulfil them. The compatibility of user
data‑quality requirements with instrumental performance must be considered not only
at the design and planning phase of a project, but also continually during operation, and
implementation must be planned to optimize cost/benefit and cost/performance ratios.
This involves a shared responsibility between users, instrument experts and logistic experts
to match technical and financial factors. In particular, instrument experts must study the
data quality requirements of the users to be able to propose specifications within the
technical state of the art. This important phase of design is called value analysis. If it is
neglected, as is often the case, it is likely that the cost or quality requirements, or both, will
not be satisfied, possibly to such an extent that the project will fail and efforts will have
been wasted.
may not be anticipated, causing many difficulties when they are subsequently discovered.
An example of this is an underspecification resulting in excessive wear or drift. In general,
only high quality instruments should be employed for meteorological purposes. Reference
should be made to the relevant information given in the various chapters in the present
Guide. Further information on the performance of several instruments can be found in
the reports of WMO international instrument intercomparisons and in the proceedings of
the WMO Commission for Instruments and Methods of Observation (CIMO) and other
international conferences on instruments and methods of observation.
Acceptance tests. Before installation and acceptance, it is necessary to ensure that the
instruments fulfil the original specifications. The performance of instruments, and their
sensitivity to influence factors, should be published by manufacturers and are sometimes
certified by calibration authorities. However, WMO instrument intercomparisons show
that instruments may still be degraded by factors affecting their quality which may
appear during the production and transportation phases. Calibration errors are difficult
or impossible to detect when adequate standards and appropriate test and calibration
facilities are not readily available. It is an essential component of good management to
carry out appropriate tests under operational conditions before instruments are used for
operational purposes. These tests can be applied both to determine the characteristics of a
given model and to control the effective quality of each instrument.
Compatibility. Data compatibility problems can arise when instruments with different technical
characteristics are used for taking the same types of measurements. This can happen,
for example, when changing from manual to automated measurements, when adding
new instruments of different time constants, when using different sensor shielding, when
applying different data reduction algorithms, and so on. The effects on data compatibility
and homogeneity should be carefully investigated by long‑term intercomparisons.
Reference should be made to the various WMO reports on international instrument
intercomparisons.
Siting and exposure. The density of meteorological stations depends on the timescale and
space scale of the meteorological phenomena to be observed and is generally specified
by the users, or set by WMO regulations. Experimental evidence exists showing that
improper local siting and exposure can cause a serious deterioration in the accuracy and
representativeness of measurements. General siting and exposure criteria are given in
Volume I, Chapter 1, and detailed information appropriate to specific instruments is given
in the various chapters of Volume I. Further reference should be made to the regulations in
WMO (2015). Attention should also be paid to external factors that can introduce errors,
such as dust, pollution, frost, salt, large ambient temperature extremes or vandalism.
Data acquisition. Data quality is not only a function of the quality of the instruments and their
correct siting and exposure, but also depends on the techniques and methods used to
obtain data and to convert them into representative data. A distinction should be made
between automated measurements and human observations. Depending on the technical
characteristics of a sensor, in particular its time constant, proper sampling and averaging
procedures must be applied. Unwanted sources of external electrical interference and
noise can degrade the quality of the sensor output and should be eliminated by proper
sensor‑signal conditioning before entering the data‑acquisition system. Reference
should be made to sampling and filtering in Volume III, Chapters 1 and 2. In the case of
manual instrument readings, errors may arise from the design, settings or resolution of
the instrument, or from the inadequate training of the observer. For visual or subjective
observations, errors can occur through an inexperienced observer misinterpreting the
meteorological phenomena.
Data processing. Errors may also be introduced by the conversion techniques or computational
procedures applied to convert the sensor data into Level II or Level III data. Examples of
this are the calculation of humidity values from measured relative humidity or dew point
and the reduction of pressure to mean sea level. Errors also occur during the coding or
transcription of meteorological messages, in particular if performed by an observer.
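As an illustration only, the reduction of pressure to mean sea level mentioned above can be sketched numerically. The following is a minimal sketch of one common form of the reduction, assuming a mean virtual temperature for the fictitious air column below the station; the function name, constants and example values are illustrative, and operational services apply nationally prescribed procedures.

```python
import math

G = 9.80665      # standard gravity, m s-2
R_D = 287.05     # specific gas constant of dry air, J kg-1 K-1

def pressure_to_msl(p_station_hpa, elevation_m, t_mean_k):
    """Reduce station pressure (hPa) to mean sea level (hPa).

    Uses the hydrostatic relation p0 = p * exp(g*h / (Rd*Tm)), where Tm is
    an assumed mean (virtual) temperature of the air column below the station.
    """
    return p_station_hpa * math.exp(G * elevation_m / (R_D * t_mean_k))

# Example: 900 hPa measured at 1 000 m with a 285 K column mean temperature
p0 = pressure_to_msl(900.0, 1000.0, 285.0)   # about 1014.6 hPa
```

Errors in the assumed column temperature propagate directly into the reduced value, which is one way conversion procedures introduce the errors discussed above.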
Real‑time quality control. Data quality depends on the real‑time quality‑control procedures
applied during data acquisition and processing and during the preparation of messages,
in order to eliminate the main sources of errors. These procedures are specific to each type
of measurement but generally include gross checks for plausible values, rates of change
and comparisons with other measurements (for example, dew point cannot exceed
temperature). Special checks concern manually entered observations and meteorological
messages. In AWSs, special built‑in test equipment and software can detect specific
hardware errors. The application of these procedures is most important since some errors
introduced during the measuring process cannot be eliminated later. For an overview of
manual and automatic methods in use, refer to other paragraphs of this chapter as well as to
Volume III, Chapter 1 and WMO (1993a, 2010, 2015, 2017a).
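The gross checks described above (plausible-value limits, rate-of-change limits and internal consistency, such as dew point not exceeding temperature) can be sketched as follows. The limit values and function name are illustrative assumptions, not WMO-prescribed values, and the rate-of-change check assumes consecutive observations one minute apart.

```python
# Illustrative plausible ranges and per-minute step limits (assumptions)
PLAUSIBLE = {"temperature": (-80.0, 60.0),    # degrees Celsius
             "dew_point":   (-80.0, 60.0),    # degrees Celsius
             "pressure":    (500.0, 1100.0)}  # hPa

MAX_STEP = {"temperature": 3.0,    # max change per minute, illustrative
            "pressure":    0.5}

def quality_control(obs, previous=None):
    """Return a list of error flags for one observation (a dict of values)."""
    flags = []
    # Gross check: plausible values
    for name, (lo, hi) in PLAUSIBLE.items():
        if name in obs and not lo <= obs[name] <= hi:
            flags.append(f"{name}: outside plausible range")
    # Internal consistency: dew point cannot exceed temperature
    if "dew_point" in obs and "temperature" in obs:
        if obs["dew_point"] > obs["temperature"]:
            flags.append("dew_point exceeds temperature")
    # Rate-of-change check against the previous observation
    if previous is not None:
        for name, step in MAX_STEP.items():
            if name in obs and name in previous:
                if abs(obs[name] - previous[name]) > step:
                    flags.append(f"{name}: unrealistic rate of change")
    return flags
```

For example, `quality_control({"temperature": 12.0, "dew_point": 14.0})` flags the inconsistent dew point, while a plausible, consistent observation returns an empty list.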
Testing and calibration. During their operation, the performance and instrumental
characteristics of meteorological instruments change for reasons such as the ageing of
hardware components, degraded maintenance, exposure, and so forth. These may cause
long‑term drifts or sudden changes in calibration. Consequently, instruments need regular
inspection and calibration to provide reliable data. This requires the availability of standards
and of appropriate calibration and test facilities. It also requires an efficient calibration plan
and calibration housekeeping. Reference should be made to the present volume, Chapter 4, for general
information about test and calibration aspects and to the relevant chapters of Volume I for individual
instruments.
Maintenance. Maintenance can be corrective (when parts fail), preventive (such as cleaning
or lubrication) or adaptive (in response to changed requirements or obsolescence). The
quality of the data provided by an instrument is considerably affected by the quality of its
maintenance, which in turn depends mainly on the ability of maintenance personnel and
the maintenance concept. The capabilities, personnel and equipment of the organization
or unit responsible for maintenance must be adequate for the instruments and networks.
Several factors have to be considered, such as a maintenance plan, which includes
corrective, preventive and adaptive maintenance, logistic management, and the repair, test
and support facilities. It must be noted that the maintenance costs of equipment can greatly
exceed its purchase costs (see Volume III, Chapter 1).
Training and education. Data quality also depends on the skills of the technical staff in charge
of testing, calibration and maintenance activities, and of the observers making the
observations. Training and education programmes should be organized according to a
rational plan geared towards meeting the needs of users, and especially the maintenance
and calibration requirements outlined above, and should be adapted to the system; this
is particularly important for AWSs. As part of the system procurement, the manufacturer
should be obliged to provide very comprehensive operational and technical documentation
and to organize operational and technical training courses (see the present volume,
Chapter 5) in the NMHS.
Metadata. Sound quality assurance entails the availability of detailed information on the
observing system itself and in particular on all changes that occur during the time of its
operation. Such information on data, known as metadata, enables the operator of an
observing system to take the most appropriate preventive, corrective and adaptive actions
to maintain or enhance data quality. Metadata requirements are further considered
in 1.9. For further information on metadata, see Volume I, Chapter 1 (Annex 1.F) and
WMO (2017b).
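As an illustration only, a metadata change log of the kind described above might be sketched as follows. The record and field names are assumptions for illustration and do not follow the WIGOS Metadata Standard schema (WMO, 2017b).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataEvent:
    date: str           # ISO date on which the change took effect
    element: str        # e.g. "temperature sensor", "site exposure"
    description: str    # what changed, and why

@dataclass
class StationMetadata:
    station_id: str
    latitude: float
    longitude: float
    elevation_m: float
    events: List[MetadataEvent] = field(default_factory=list)

    def log_change(self, date, element, description):
        """Append one dated change event to the station history."""
        self.events.append(MetadataEvent(date, element, description))

# Hypothetical station and change event, for illustration only
stn = StationMetadata("06610", 46.81, 6.94, 491.0)
stn.log_change("2014-05-01", "temperature sensor",
               "Sensor replaced; overlapping measurements retained")
```

A dated history of this kind is what later allows a discontinuity in a time series to be attributed to an operational change rather than to the climate.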
The Manual on the Global Observing System (WMO, 2015) prescribes that certain quality‑control
procedures must be applied to all meteorological data to be exchanged internationally. Level I
and Level II data, and the conversion from one to the other, must be subjected to quality control.
WMO (2017a) prescribes that quality‑control procedures must be applied by meteorological
data processing centres to most kinds of weather reports exchanged internationally, to check for
coding errors, internal consistency, time and space consistency, and physical and climatological
limits, and it specifies the minimum frequency and times for quality control.
Quality control, as specifically defined in 1.1, is applied in real time or near real time to data
acquisition and processing. In practice, responsibility for quality control is assigned to various
points along the data chain. These may be at the station, if there is direct manual involvement in
data acquisition, or at the various centres where the data are processed.
Quality assurance procedures must be introduced and reassessed during the development
phases of new sensors or observing systems (see Figure 1.5).
The observer or the officer in charge at a station is expected to ensure that the data leaving the
station have been quality controlled, and should be provided with established procedures for
attending to this responsibility. This is a specific function, in addition to other maintenance and
record‑keeping functions, and includes the following:
Figure 1.5. Quality assurance during the development of observing systems: the strategy of NMSs
and the requirements of internal and external users and customers feed an evaluation of NMS
processes; where change is required, quality assurance (preventive actions, development, testing,
verification) and quality control (monitoring of data, consistency checks) are applied through data
management, transfer to data centres, and archiving in the database
(b) Climatological checks: These are checks for consistency: the observer knows, or is provided with
charts or tables of, the normal seasonal ranges of variables at the station, and should not
allow unusual values to go unchecked;
(c) Temporal checks: These should be made to ensure that changes since the last observation
are realistic, especially when the observations have been made by different observers;
(e) Checks of all messages and other records against the original data.
At AWSs, some of the above checks should be performed by the software, as well as engineering
checks on the performance of the system. These are discussed in Volume III, Chapter 1.
The procedures for controlling the quality of upper‑air data are essentially the same as those
for surface data. Checks should be made for internal consistency (such as lapse rates and
shears), for climatological and temporal consistency, and for consistency with normal surface
observations. For radiosonde operations, it is of the utmost importance that the baseline initial
calibration be explicitly and deliberately checked. The message must also be checked against the
observed data.
The automation of on‑station quality control is particularly useful for upper‑air data.
Data should be checked in real time or as close to real time as possible, at the first and
subsequent points where they are received or used. It is highly advisable to apply the same
urgent checks to all data, even to those that are not used in real time, because later quality
control tends to be less effective. If available, automation should of course be used, but certain
quality‑control procedures are possible without computers, or with only partial assistance by
computing facilities. The principle is that every message should be checked, preferably at each
stage of the complete data chain.
The checks that have already been performed at stations are usually repeated at data centres,
perhaps in more elaborate form by making use of automation. Data centres, however, usually
have access to other network data, thus making a spatial check possible against observations
from surrounding stations or against analysed or predicted fields. This is a very powerful method
and is the distinctive contribution of a data centre.
If errors are found, the data should be either rejected or corrected by reference back to the
source, or should be corrected at the data centre by inference. The last of these alternatives may
evidently introduce further errors, but it is nevertheless valid in many circumstances; data so
corrected should be flagged in the database and should be used only carefully.
The quality‑control process produces data of established quality, which may then be used for
real‑time operations and for a databank. However, a by‑product of this process should be the
compilation of information about the errors that were found. It is good practice to establish at
the first or subsequent data‑processing point a system for immediate feedback to the origin
of the data if errors are found, and to compile a record for use by the network manager in
performance monitoring, as discussed below. This function is best performed at the regional
level, where there is ready access to the field stations.
The detailed procedures described in WMO (1993a) are a guide to controlling the quality of data
for international exchange, under the recommendations of WMO (2017a).
If quality is to be maintained, it is absolutely essential that errors be tracked back to their source,
with some kind of corrective action. For data from staffed stations this is very effectively done in
near real time, not only because the data may be corrected, but also to identify the reason for the
error and prevent it from recurring.
It is good practice to assign a person at a data centre or other operational centre with the
responsibility for maintaining near‑real‑time communication and effective working relations with
the field stations, to be used whenever errors in the data are identified.
(a) Advice from data centres should be used to record the numbers and types of errors
detected by quality‑control procedures;
(b) Data from each station should be compiled into synoptic and time‑section sets. Such sets
should be used to identify systematic differences from neighbouring stations, both in
spatial fields and in comparative time series. It is useful to derive statistics of the mean and
the scatter of the differences. Graphical methods are effective for these purposes;
(c) Reports should be obtained from field stations about equipment faults, or other aspects of
performance.
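The statistics mentioned in (b), the mean and the scatter of the differences from a neighbouring station, can be sketched as follows. The function name and the station values are invented for illustration; a persistent non-zero mean difference suggests a systematic fault such as a calibration change.

```python
from statistics import mean, stdev

def difference_stats(station, neighbour):
    """Mean bias and standard deviation of paired (station - neighbour) differences."""
    diffs = [s - n for s, n in zip(station, neighbour)]
    return mean(diffs), stdev(diffs)

# Hypothetical paired observations from a station and its neighbour
bias, scatter = difference_stats([10.2, 11.0, 9.8, 10.5],
                                 [10.0, 10.7, 9.9, 10.1])
# bias is about +0.2, scatter about 0.22
```

In practice such statistics would be accumulated over long comparative time series and inspected graphically, as the text suggests.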
These types of records are very effective in identifying systematic faults in performance and
in indicating corrective action. They are powerful indicators of many factors that affect the
data, such as exposure or calibration changes, deteriorating equipment, changes in the quality
of consumables or the need for retraining. They are particularly important for maintaining
confidence in automatic equipment.
The results of performance monitoring should be used for feedback to the field stations, which is
important to maintain motivation. The results also indicate when action is necessary to repair or
upgrade the field equipment.
Performance monitoring is a time‑consuming task, to which the network manager must allocate
adequate resources. WMO (1988) describes a system to monitor data from an AWS network,
using a small, dedicated office with staff monitoring real‑time output and advising the network
managers and data users. Miller and Morone (1993) describe a system with similar functions,
in near real time, making use of a mesoscale numerical model for the spatial and temporal tests
on the data.
In the past, observational networks were primarily built to support weather forecasting activities.
Operational quality control was focused mainly on identifying outliers, but rarely incorporated
checks for data homogeneity and continuity of time series. The surge of interest in climate
change, primarily as a result of concerns over increases in greenhouse gases, changed this
situation. Data homogeneity tests have revealed that many of the apparent climate changes can
be attributed to inhomogeneities in time series caused only by operational changes in observing
systems. This section attempts to summarize these causes and presents some guidelines
concerning the necessary information on data, namely, metadata, which should be made
available to support data homogeneity and climate change investigations.
The historical survey of changes in radiosondes (WMO, 1993b) illustrates the seriousness of the
problem and is a good example of the careful work that is necessary to eliminate it.
Changes in the surface‑temperature record when manual stations are replaced by AWSs, and
changes in the upper‑air records when radiosondes are changed, are particularly significant
cases of data inhomogeneities. These two cases are now well recognized and can, in principle, be
anticipated and corrected, but performance monitoring can be used to confirm the effectiveness
of corrections, or even to derive them.
1.9.2 Metadata
A metadata database contains initial set‑up information together with updates whenever
changes occur. Major elements include the following:
(i) The operating authority, and the type and purpose of the network;
It is necessary to include maps and plans on appropriate scales.
(vii) Observer;
A useful survey of requirements is given in WMO (1994), with examples of the effects of changes
in observing operations and an explanation of the advantages of good metadata for obtaining
a reliable climate record from discontinuous data. The basic functional elements of a system for
maintaining a metadata database may be summarized as follows:
(a) Standard procedures must be established for collecting overlapping measurements for all
significant changes made in instrumentation, observing practices and sensor siting;
(b) Routine assessments must be made of ongoing calibration, maintenance, and homogeneity
problems for the purpose of taking corrective action, when necessary;
(c) There must be open communication between the data collector and the researcher to
provide feedback mechanisms for recognizing data problems, for correcting them or at least
flagging potential problems, and for improving, or adding to, the documentation to meet
initially unforeseen user requirements (for example, by work groups);
(d) There must be detailed and readily available documentation on the procedures, rationale,
testing, assumptions and known problems involved in the construction of the dataset from
the measurements.
These four recommendations would have the effect of providing a data user with enough
metadata to enable manipulation, amalgamation and summarization of the data with minimal
assumptions regarding data quality and homogeneity.
All the factors affecting data quality described in 1.6 are the subject of network management.
In particular, network management must include corrective action in response to the network
performance revealed by quality‑control procedures and performance monitoring.
Networks are defined in WMO (2015), and guidance on network management in general terms
is given in WMO (2010), including the structure and functions of a network management unit.
Network management practices vary widely according to locally established administrative
arrangements.
It is highly desirable to identify a particular person or office as the network manager to whom
operational responsibility is assigned for the impact of the various factors on data quality. Other
specialists who may be responsible for the management and implementation of some of these
factors must collaborate with the network manager and accept responsibility for their effect on
data quality.
The manager should keep under review the procedures and outcomes associated with all of the
factors affecting quality, as discussed in 1.6, including the following considerations:
(a) The quality‑control systems described in 1.1 are operationally essential in any
meteorological network and should receive priority attention by the data users and by the
network management;
(d) Equipment maintenance may be a direct function of the network management unit. If not,
there should be particularly effective collaboration between the network manager and the
office responsible for the equipment;
(e) The administrative arrangements should enable the network manager to take, or arrange
for, corrective action arising from quality‑control procedures, performance monitoring, the
inspection programme, or any other factor affecting quality. One of the most important
other factors is observer training, as described in the present volume, Chapter 5, and the
network manager should be able to influence the content and conduct of training courses
and the prescribed training requirements.
1.10.1 Inspections
It is highly advisable to have a systematic and exhaustive procedure fully documented in the form
of inspections and maintenance handbooks, to be used by the visiting inspectors. Procedures
should include the details of subsequent reporting and follow‑up.
The inspector should attend, in particular, to the following aspects of station operations:
(b) Observing methods: Bad practice can easily occur in observing procedures, and the work of
all observers should be continually reviewed. Uniformity in the methods of recording and
coding is essential for synoptic and climatological use of the data;
(c) Exposure: Any changes in the surroundings of the station must be documented and
corrected in due course, if practicable. Relocation may be necessary.
Inspections of manual stations also serve the purpose of maintaining the interest and enthusiasm
of the observers. The inspector must be tactful, informative, enthusiastic and able to obtain
willing cooperation.
A prepared form for recording the inspection should be completed for every inspection. It should
include a checklist on the condition and installation of the equipment and on the ability and
competence of the observers. The inspection form may also be used for other administrative
purposes, such as an inventory.
It is most important that all changes identified during the inspection should be permanently
recorded and dated so that a station history can be compiled for subsequent use for climate
studies and other purposes.
An optimum frequency of inspection visits cannot be generally specified, even for one particular
type of station. It depends on the quality of the observers and equipment, the rate at which
the equipment and exposure deteriorates, and changes in the station staff and facilities.
An inspection interval of two years may be acceptable for a well‑established station, and six
months may be appropriate for automatic stations. Some kinds of stations will have special
inspection requirements.
Some equipment maintenance may be performed by the inspector or by the inspection team,
depending on the skills available. In general, there should be an equipment maintenance
programme, as is the case for inspections. This is not discussed here because the requirements
and possible organizations are very diverse.
REFERENCES AND FURTHER READING
Deming, W. E. Out of the Crisis: Quality, Productivity, and Competitive Position; Cambridge University Press:
Cambridge [Cambridgeshire], 1986.
International Organization for Standardization (ISO). Quality management systems – Fundamentals and
vocabulary; ISO 9000:2015; Geneva, 2015. https://www.iso.org/standard/45481.html
International Organization for Standardization (ISO). Quality management systems – Requirements;
ISO 9001:2015; Geneva, 2015. https://www.iso.org/standard/62085.html
International Organization for Standardization (ISO). Quality management – Quality of an organization –
Guidance to achieve sustained success; ISO 9004:2018; Geneva, 2018.
https://www.iso.org/standard/70397.html
International Organization for Standardization (ISO). Guidelines for auditing management systems;
ISO 19011:2018; Geneva, 2018. https://www.iso.org/standard/70017.html
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
General requirements for the competence of testing and calibration laboratories; ISO/IEC 17025:2017;
Geneva, 2017. https://www.iso.org/standard/66912.html
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
Information technology – Service management – Part 1: Service management system requirements;
ISO/IEC 20000-1:2018; Geneva, 2018. https://www.iso.org/standard/70636.html
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
Information technology – Service management – Part 2: Guidance on the application of service
management systems; ISO/IEC 20000-2:2019; Geneva, 2019.
https://www.iso.org/standard/72120.html
Kaplan, R. S.; Norton, D. P. The Balanced Scorecard: Translating Strategy into Action; Harvard Business School
Press: Boston, MA, 1996. http://www.untag-smd.ac.id/files/Perpustakaan_Digital_1/
BALANCED%20SCORECARD%20The%20balanced%20scorecard%20translating%20strategy
%20into%20action%20%5B1996%5D.pdf
Miller, P. A.; Morone, L. L. Real-time quality control of hourly reports from the automated surface
observing system. In Preprints of the Eighth Symposium on Meteorological Observations and
Instrumentation; American Meteorological Society, 1993; pp 373–378.
Field, M.; Nash, J. Practical experience of the operation of quality evaluation programmes for automated
surface observations both on land and over the sea. In Papers Presented at the WMO Technical
Conference on Instruments and Methods of Observation (TECO-1988) (WMO/TD-No. 222),
Report No. 33; World Meteorological Organization (WMO): Geneva, 1988.
World Meteorological Organization (WMO) 1993a. Guide on the Global Data-Processing System
(WMO-No. 305). Geneva, 1993.
World Meteorological Organization (WMO) 1993b. Gaffen, D. J. Historical Changes in Radiosonde Instruments
and Practices: Final Report (WMO/TD-No. 541), Report No. 50; Geneva, 1993.
World Meteorological Organization (WMO) 1994. Hadeen, K. D.; Guttman, N. B. Homogeneity of data and
the climate record. In Papers Presented at the WMO Technical Conference on Instruments and Methods
of Observation (TECO-94) (WMO/TD-No. 588), Report No. 57; Geneva, 1994.
World Meteorological Organization (WMO) 2005a. WMO Quality Management Framework (QMF): first WMO
Technical Report (WMO/TD-No. 1268) (revised edition); Geneva, 2005.
World Meteorological Organization (WMO) 2005b. Guidelines on Quality Management Procedures and
Practices for Public Weather Services (WMO/TD-No. 1256) 2005. PWS No. 11; Geneva, 2005.
World Meteorological Organization (WMO) 2010. Guide to the Global Observing System (WMO-No. 488).
Geneva, 2010 (updated in 2017).
World Meteorological Organization (WMO) 2015. Manual on the Global Observing System (WMO-No. 544),
Volume I. Geneva, 2015 (updated in 2017).
World Meteorological Organization (WMO) 2017a. Manual on the Global Data-processing and Forecasting
System (WMO-No. 485), Volume I. Geneva, 2017.
World Meteorological Organization (WMO) 2017b. WIGOS Metadata Standard (WMO-No. 1192).
Geneva, 2017.
CHAPTER 2. SAMPLING METEOROLOGICAL VARIABLES
2.1 GENERAL
The purpose of this chapter is to give an introduction to the complex subject of sampling, for
non‑experts who need enough knowledge to develop a general understanding of the issues and to
acquire a perspective of the importance of the techniques.
Atmospheric variables such as wind speed, temperature, pressure and humidity are functions
of four dimensions – two horizontal, one vertical, and one temporal. They vary irregularly in all
four, and the purpose of the study of sampling is to define practical measurement procedures to
obtain representative observations with acceptable uncertainties in the estimations of mean and
variability.
(a) At an elementary level, the basic meteorological problem of obtaining a mean value of
a fluctuating quantity representative of a stated sampling interval at a given time, using
instrument systems with long response times compared with the fluctuations, can be
discussed. At the simplest level, this involves consideration of the statistics of a set of
measurements, and of the response time of instruments and electronic circuits;
(b) The problem can be considered more precisely by making use of the theory of time‑series
analysis, the concept of the spectrum of fluctuations, and the behaviour of filters. These
topics are necessary for the more complex problem of using relatively fast‑response
instruments to obtain satisfactory measurements of the mean or the spectrum of a rapidly
varying quantity, wind being the prime example.
It is therefore convenient to begin with a discussion of time series, spectra and filters in 2.2
and 2.3. Section 2.4 gives practical advice on sampling. The discussion here, for the most part,
assumes digital techniques and automatic processing.
There are many textbooks available to give the necessary background for the design of sampling
systems or the study of sampled data. See, for example, Bendat and Piersol (1986) or Otnes and
Enochson (1978). Other useful texts include Pasquill and Smith (1983), Stearns and Hush (1990),
Kulhánek (1976), and Jenkins and Watts (1968).
2.1.1 Definitions
For the purposes of this chapter the following definitions are used:
Sample. A single measurement, typically one of a series of spot readings of a sensor system. Note
that this differs from the usual meaning in statistics of a set of numbers or measurements
which is part of a population.
An observation. The result of the sampling process, being the quantity reported or recorded
(often also called a measurement). In the context of time‑series analysis, an observation is
derived from a number of samples.
A measurement. The ISO definition is a “set of operations having the object of determining the
value of a quantity”. In common usage, the term may be used to mean the value of either a
sample or an observation.
Sampling time or observation period. The length of the time over which one observation is
made, during which a number of individual samples are taken.
Sampling function or weighting function. In its simplest definition, an algorithm for averaging
or filtering the individual samples.
Sampling frequency. The frequency at which samples are taken. The sample spacing is the time
between samples.
Smoothing. The process of attenuating the high frequency components of the spectrum
without significantly affecting the lower frequencies. This is usually done to remove noise
(random errors and fluctuations not relevant for the application).
Filter. A device for attenuating or selecting any chosen frequencies. Smoothing is performed by
a low‑pass filter, and the terms smoothing and filtering are often used interchangeably in this
sense. However, there are also high‑pass and band‑pass filters. Filtering may be a property of
the instrument, such as inertia, or it may be performed electronically or numerically.
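As an illustration of the definitions above, numerical smoothing by a simple low-pass filter, here a centred moving average over equispaced samples, can be sketched as follows. The window length is an arbitrary assumption chosen by the application.

```python
def moving_average(samples, window=3):
    """Smooth a series with a centred boxcar window of odd length `window`.

    High-frequency fluctuations (noise) are attenuated while slower
    variations pass through; edge points use a truncated window.
    """
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# A rapidly alternating series is flattened towards its mean
smoothed = moving_average([1.0, 5.0, 1.0, 5.0, 1.0])
```

This is the numerical counterpart of the filtering that an instrument's inertia performs physically.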
Sampled observations are made at a limited rate and for a limited time interval over a limited
area. In practice, observations should be designed to be sufficiently frequent to be representative
of the unsampled parts of the (continuous) variable, and are often taken as being representative
of a longer time interval and larger area.
The user of an observation expects it to be representative, or typical, of an area and time, and
of an interval of time. This area, for example, may be “the airport” or that area within a radius
of several kilometres and within easy view of a human observer. The time is the time at which
the report was made or the message transmitted, and the interval is an agreed quantity,
often 1, 2 or 10 min.
A typical example of sampling and time averaging is the measurement of temperature each
minute (the samples), the computation of a 10 min average (the sampling interval and the
sampling function), and the transmission of this average (the observation) in a synoptic
report every 3 h. When these observations are collected over a period from the same site, they
themselves become samples in a new time sequence with a 3 h spacing. When collected from
a large number of sites, these observations also become samples in a spatial sequence. In this
sense, representative observations are also representative samples. In this chapter we discuss the
initial observation.
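The worked example above, 1 min samples averaged into a 10 min observation, can be sketched as follows; the temperature values are invented for illustration.

```python
def ten_minute_observation(samples_1min):
    """Average ten consecutive 1 min samples into one reported observation."""
    assert len(samples_1min) == 10, "expected exactly ten 1 min samples"
    return sum(samples_1min) / 10.0

# Ten spot temperature readings taken one minute apart (the samples)
samples = [15.2, 15.1, 15.3, 15.0, 15.2, 15.4, 15.1, 15.2, 15.3, 15.2]

# The 10 min average (the observation), which would then be transmitted
# in a synoptic report every 3 h
obs = ten_minute_observation(samples)
```

Each such observation in turn becomes one sample of the 3 h time sequence described above.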
By applying the mathematical operation known as the Fourier transform, an irregular function
of time (or distance) can be reduced to its spectrum, which is the sum of a large number of
sinusoids, each with its own amplitude, wavelength (or period or frequency) and phase. In
broad contexts, these wavelengths (or frequencies) define “scales” or “scales of motion” of
the atmosphere.
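The decomposition described above can be illustrated with a direct discrete Fourier transform of a synthetic sampled series: the two sinusoids from which the series is built reappear as the two dominant lines of its amplitude spectrum. This is a sketch for illustration only; in practice a fast Fourier transform algorithm would be used.

```python
import cmath
import math

def amplitude_spectrum(x):
    """One-sided amplitude spectrum of a real series by direct DFT.

    Entry k gives the amplitude of the sinusoid completing k cycles over
    the record (the factor 2 applies to the interior harmonics used here).
    """
    n = len(x)
    amps = []
    for k in range(n // 2 + 1):
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        amps.append(2.0 * abs(s) / n)
    return amps

# Synthetic series: two sinusoids with 4 and 16 cycles over 64 samples
n = 64
x = [2.0 * math.sin(2 * math.pi * 4 * j / n)
     + 1.0 * math.sin(2 * math.pi * 16 * j / n)
     for j in range(n)]

amps = amplitude_spectrum(x)   # peaks of 2.0 at k = 4 and 1.0 at k = 16
```

The harmonic index k corresponds to a frequency of k cycles per record length, which is how sampled data relate wavelength, period and frequency in practice.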
The range of these scales is limited in the atmosphere. At one end of the spectrum, horizontal
scales cannot exceed the circumference of the Earth or about 40 000 km. For meteorological
purposes, vertical scales do not exceed a few tens of kilometres. In the time dimension, however,
the longest scales are climatological and, in principle, unbounded, but in practice the longest
period does not exceed the length of records. At the short end, the viscous dissipation of
turbulent energy into heat sets a lower bound. Close to the surface of the Earth, this bound is at a
wavelength of a few centimetres and increases with height to a few metres in the stratosphere. In
the time dimension, these wavelengths correspond to frequencies of tens of hertz. It is correct to
say that atmospheric variables are bandwidth limited.
This section is a layperson’s introduction to the concepts of time‑series analysis which are the
basis for good practice in sampling. In the context of the present Guide, they are particularly
important for the measurement of wind, but the same problems arise for temperature, pressure
and other quantities. They became important for routine meteorological measurements when
automatic measurements were introduced, because frequent fast sampling then became
possible. Serious errors can occur in the estimates of the mean, the extremes and the spectrum if
systems are not designed correctly.
Although measurements of spectra are non‑routine, they have many applications. The spectrum
of wind is important in engineering, atmospheric dispersion, diffusion and dynamics. The
concepts discussed here are also used for quantitative analysis of satellite data (in the horizontal
space dimension) and in climatology and micrometeorology.
(a) An optimum sampling rate can be assessed from consideration of the variability of the
quantity being measured. Estimates of the mean and other statistics of the observations will
have smaller uncertainties with higher sampling frequencies, namely, larger samples;
(b) The Nyquist theorem states that a continuous fluctuating quantity can be precisely
determined by a series of equispaced samples if they are sufficiently close together;
(c) If the sampling frequency is too low, fluctuations at the higher unsampled frequencies
(above the Nyquist frequency, defined in 2.2.1) will affect the estimate of the mean value.
They will also affect the computation of the lower frequencies, and the measured spectrum
will be incorrect. This is known as aliasing. It can cause serious errors if it is not understood
and allowed for in the system design;
(d) Aliasing may be avoided by using a high sampling frequency or by filtering so that a lower,
more convenient sampling frequency can be used;
(e) Filters may be digital or analogue. A sensor with a suitably long response time acts
as a filter.
A full understanding of sampling involves knowledge of power spectra, the Nyquist theorem,
filtering and instrument response. This is a highly specialized subject, requiring understanding
of the characteristics of the sensors used, the way the output of the sensors is conditioned,
processed and logged, the physical properties of the elements being measured, and the purpose
to which the analysed data are to be put. This, in turn, may require expertise in the physics of
the instruments, the theory of electronic or other systems used in conditioning and logging
processes, mathematics, statistics and the meteorology of the phenomena, all of which are well
beyond the scope of this chapter.
It is necessary to consider signals as being either in the time or the frequency domain. The
fundamental idea behind spectral analysis is the concept of Fourier transforms. A function, f(t),
defined between t = 0 and t = τ can be transformed into the sum of a set of sinusoidal functions:
f(t) = Σ_{j=0}^{∞} [Aj sin(jωt) + Bj cos(jωt)]    (2.1)
where ω = 2π/τ. The right‑hand side of the equation is a Fourier series. Aj and Bj are the
amplitudes of the contributions of the components at frequencies nj = jω. This is the basic
CHAPTER 2. SAMPLING METEOROLOGICAL VARIABLES 29
transformation between the time and frequency domains. The Fourier coefficients Aj and Bj relate
directly to the frequency jω and can be associated with the spectral contributions to f(t) at these
frequencies. If the frequency response of an instrument is known – that is, the way in which
it amplifies or attenuates certain frequencies – and if it is also known how these frequencies
contribute to the original signal, the effect of the frequency response on the output signal can
be calculated. The contribution of each frequency is characterized by two parameters. These can
be most conveniently taken as the amplitude and phase of the frequency component. Thus, if
equation 2.1 is expressed in its alternative form:
f(t) = Σ_{j=0}^{∞} αj sin(jωt + ϕj)    (2.2)
the amplitude and phase associated with each spectral contribution are αj and ϕj. Both can be
affected in sampling and processing.
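The equivalence of equations 2.1 and 2.2 can be sketched numerically. In the following fragment (a minimal illustration; the test signal and its amplitudes are invented for the example) the coefficients Aj and Bj of a known signal are recovered with a discrete Fourier transform and converted to the amplitude form of equation 2.2:

```python
import numpy as np

# Test signal over t = 0..tau: two known sinusoids (amplitudes chosen freely)
tau = 1.0
N = 1024
t = np.arange(N) * tau / N
omega = 2.0 * np.pi / tau
f = 2.0 * np.sin(3 * omega * t) + 0.5 * np.cos(7 * omega * t)

# Discrete Fourier transform, scaled to the A_j (sine) and B_j (cosine)
# amplitudes of equation 2.1
F = np.fft.rfft(f)
A = -2.0 * F.imag / N
B = 2.0 * F.real / N

# Amplitude of each spectral contribution, as in equation 2.2
alpha = np.hypot(A, B)

print(A[3], B[7], alpha[3])   # ~2.0, ~0.5, ~2.0
```

The recovered amplitudes match the ones put into the test signal, illustrating that the two forms carry the same information.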
So far, it has been assumed that the function f(t) is known continuously throughout its range
t = 0 to t = τ. In fact, in most examples this is not the case; the meteorological variable is measured
at discrete points in a time series, which is a series of N samples equally spaced Δt apart during
a specified period τ = (N–1)Δt. The samples are assumed to be taken instantaneously, an
assumption which is strictly not true, as all measuring devices require some time to determine
the value they are measuring. In most cases, this is short compared with the sample spacing Δt.
Even if it is not, the response time of the measuring system can be accommodated in the analysis,
although that will not be addressed here.
When considering the data that would be obtained by sampling a sinusoidal function at times Δt
apart, it can be seen that the highest frequency that can be detected is 1/(2Δt), and that in fact
any higher frequency sinusoid that may be present in the time series is represented in the data as
having a lower frequency. The frequency 1/(2Δt) is called the Nyquist frequency, designated here
as ny. The Nyquist frequency is sometimes called the folding frequency. This terminology comes
from consideration of aliasing of the data. The concept is shown schematically in Figure 2.2.
When a spectral analysis of a time series is made, because of the discrete nature of the data, the
contribution to the estimate at frequency n also contains contributions from higher frequencies,
namely from 2 jny ± n (j = 1 to ∞). One way of visualizing this is to consider the frequency
domain as if it were folded, in a concertina‑like way, at n = 0 and n = ny and so on in steps of ny.
The spectral estimate at each frequency in the range is the sum of all the contributions of those
higher frequencies that overlie it.
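The folding rule 2jny ± n can be checked with a short numerical sketch (the frequencies and sample spacing below are invented for illustration): a cosine at 8 Hz sampled every 0.1 s (ny = 5 Hz) yields exactly the same samples as a cosine at the folded frequency 2ny − 8 = 2 Hz.

```python
import numpy as np

dt = 0.1                     # sample spacing (s)
ny = 1.0 / (2.0 * dt)        # Nyquist frequency: 5 Hz
n_true = 8.0                 # signal frequency, above ny
n_alias = 2.0 * ny - n_true  # folded frequency: 2 Hz

t = np.arange(100) * dt
high = np.cos(2.0 * np.pi * n_true * t)
low = np.cos(2.0 * np.pi * n_alias * t)

# The two sampled series are indistinguishable: the 8 Hz signal
# masquerades as 2 Hz in the data
print(np.max(np.abs(high - low)))   # ~0
```

No analysis of the sampled data alone can separate the two frequencies, which is why aliasing must be prevented before sampling.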
The practical effects of aliasing are discussed in 2.4.2. It is potentially a serious problem and
should be considered when designing instrument systems. It can be avoided by minimizing, or
reducing to zero, the strength of the signal at frequencies above ny. There are a couple of ways
of achieving this. First, the system can contain a low‑pass filter that attenuates contributions at
frequencies higher than ny before the signal is digitized. The only disadvantage of this approach
is that the timing and magnitude of rapid changes will not be recorded well, or even at all.
The second approach is to have Δt small enough so that the contributions above the Nyquist
frequency are insignificant. This is possible because the spectra of most meteorological variables
fall off very rapidly at very high frequencies. This second approach will, however, not always be
practicable, as in the example of three‑hourly temperature measurements, where if Δt is of the
order of hours, small scale fluctuations, of the order of minutes or seconds, may have relatively
large spectral ordinates and alias strongly. In this case, the first method may be appropriate.
The spectral density, at least as it is estimated from a time series, is defined as:

S(nj) = (Aj² + Bj²)/ny = αj²/ny    (2.3)
Figure 2.2. Schematic illustration of the folding of the frequency domain at multiples of the Nyquist frequency (spectral density S(n); graphic not reproduced)
There are a number of ways of approaching the numerical spectral analysis of a time series. The
most obvious is a direct Fourier transform of the time series. In this case, as the series is only of
finite length, there will be only a finite number of frequency components in the transformation.
If there are N terms in the time series, there will be N/2 frequencies resulting from this analysis.
A direct calculation is very laborious, and other methods have been developed. The first
development was by Blackman and Tukey (1958), who related the auto‑correlation function
to estimates of various spectral functions. (The auto‑correlation function r(t) is the correlation
coefficient calculated between terms in the time series separated by a time interval t). This
was appropriate for the low‑powered computing facilities of the 1950s and 1960s, but it has
now been generally superseded by the so‑called fast Fourier transform (FFT), which takes
advantage of the general properties of a digital computer to greatly accelerate the calculations.
The main limitation of the method is that the time series must contain 2k terms, where k is an
integer. In general, this is not a serious problem, as in most instances there are sufficient data to
conveniently organize the series to such a length. Alternatively, some FFT computer programs
can use an arbitrary number of terms and add synthetic data to make them up to 2k.
As the time series is of finite duration (N terms), it represents only a sample of the signal of
interest. Thus, the Fourier coefficients are only an estimate of the true, or population, value.
Increasingly, the use of the above analyses is an integral part of meteorological systems and
relevant not only to the analysis of data. The exact form of spectra encountered in meteorology
can show a wide range of shapes. As can be imagined, the contributions can be from the
lowest frequencies associated with climate change through annual and seasonal contributions
through synoptic events with periods of days, to diurnal and semi‑diurnal contributions and
local mesoscale events down to turbulence and molecular variations. For most meteorological
applications, including synoptic analysis, the interest is in the range minutes to seconds. The
spectrum at these frequencies will typically decrease very rapidly with frequency. For periods
of less than 1 min, the spectrum often takes values proportional to n–5/3. Thus, there is often
relatively little contribution from frequencies greater than 1 Hz.
The spectral density is normalized so that the sum of the contributions over all frequencies gives the total variance:

Σ_{j=0}^{N/2} S(nj) = σ²    (2.4)

where σ² is the variance of the quantity being measured. It is often convenient, for analysis, to
express the spectrum in continuous form, so that equation 2.4 becomes:
∫₀^∞ S(n) dn = σ²    (2.5)
It can be seen from equations 2.4 and 2.5 that changes caused to the spectrum, say by the
instrument system, will alter the value of σ 2 and hence the statistical properties of the output
relative to the input. This can be an important consideration in instrument design and
data analysis.
Note also that the left‑hand side of equation 2.5 is the area under the curve in Figure 2.2. That
area, and therefore the variance, is not changed by aliasing if the time series is stationary, that is if
its spectrum does not change from time to time.
Sensors, together with any electronic circuits used with them to form an instrument
system, have response times and filtering characteristics that affect the observations.
No meteorological instrument system, or any instrumental system for that matter, precisely
follows the quantity it is measuring. There is, in general, no simple way of describing the
response of a system, although there are some reasonable approximations to them. The simplest
can be classified as first and second order responses. This refers to the order of the differential
equation that is used to approximate the way the system responds. For a detailed examination of
the concepts that follow, there are many references in physics textbooks and the literature (see
MacCready and Jex, 1964).
In the first order system, such as a simple sensor or the simplest low‑pass filter circuit, the rate
of change of the value recorded by the instrument is directly proportional to the difference
between the value registered by the instrument and the true value of the variable. Thus, if the
true value at time t is s(t) and the value measured by the sensor is s0(t), the system is described by
the first order differential equation:
ds0(t)/dt = [s(t) − s0(t)]/TI    (2.6)
where TI is a constant with the dimension of time, characteristic of the system. A first order
system’s response to a step function is proportional to exp(–t/TI), and TI is observable as the time
taken, after a step change, for the system to reach 63% of the final steady reading. Equation 2.6 is
valid for many sensors, such as thermometers.
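Equation 2.6 can be integrated numerically to verify the 63% property. A minimal sketch (the time constant and step size are invented for the example):

```python
import numpy as np

TI = 4.0        # time constant (s), illustrative
dt = 0.001      # integration step (s)
t = np.arange(0, 20.0, dt)

s = np.ones_like(t)       # true value: unit step applied at t = 0
s0 = np.zeros_like(t)     # value registered by the sensor

# Euler integration of equation 2.6: ds0/dt = (s - s0)/TI
for k in range(1, len(t)):
    s0[k] = s0[k - 1] + dt * (s[k - 1] - s0[k - 1]) / TI

# After one time constant the sensor has reached 1 - exp(-1), about 63%,
# of the final steady reading
print(s0[int(TI / dt)])   # ~0.632
```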
A cup anemometer is a first order instrument, with the special property that TI is not constant.
It varies with wind speed. In fact, the parameter s0TI is called the distance constant, because
it is nearly constant. As can be seen in this case, equation 2.6 is no longer a simple first order
equation as it is now non‑linear and consequently presents considerable problems in its solution.
A further problem is that TI also depends on whether the cups are speeding up or slowing
down; that is, whether the right‑hand side is positive or negative. This arises because the drag
coefficient of a cup is lower if the airflow is towards the front rather than towards the back.
The wind vane approximates a second order system because the acceleration of the vane towards
the true wind direction is proportional to the displacement of the vane from the true direction.
This is, of course, the classical description of an oscillator (for example, a pendulum). Vanes, both
naturally and by design, are damped. This occurs because of a resistive force proportional to, and
opposed to, its rate of change. Thus, the differential equation describing the vane’s action is:
d²ϕ0(t)/dt² = −k1 [ϕ0(t) − ϕ(t)] − k2 dϕ0(t)/dt    (2.7)
where ϕ is the true wind direction; ϕ 0 is the direction of the wind vane; and k1 and k2 are
constants. The solution to this is a damped oscillation at the natural frequency of the vane
(determined by the constant k1). The damping of course is very important; it is controlled by the
constant k2. If it is too small, the vane will simply oscillate at the natural frequency; if too great,
the vane will not respond to changes in wind direction.
It is instructive to consider how these two systems respond to a step change in their input, as
this is an example of the way in which the instruments respond in the real world. Equations 2.6
and 2.7 can be solved analytically for this input. The responses are shown in Figures 2.3 and 2.4.
Note how in neither case is the real value of the element measured by the system. Also, the
choice of the values of the constants k1 and k2 can have great effect on the outputs.
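The behaviour shown in Figure 2.4 can be reproduced by integrating equation 2.7 directly. In this sketch (the constants are invented; the damping factor is k2/(2√k1)) a lightly damped vane overshoots the new direction strongly, while a heavily damped one creeps towards it without overshoot:

```python
import numpy as np

def vane_step(k1, k2, dt=1e-3, T=20.0):
    """Integrate equation 2.7 for a unit step in the true direction phi."""
    phi = 1.0                 # true wind direction after the step
    x, v = 0.0, 0.0           # vane direction and its rate of change
    out = np.empty(int(T / dt))
    for k in range(out.size):
        a = -k1 * (x - phi) - k2 * v   # angular acceleration
        v += a * dt
        x += v * dt
        out[k] = x
    return out

k1 = 4.0                                       # natural frequency sqrt(k1) = 2 rad/s
light = vane_step(k1, 2 * np.sqrt(k1) * 0.1)   # damping factor 0.1
heavy = vane_step(k1, 2 * np.sqrt(k1) * 2.0)   # damping factor 2.0

print(light.max(), heavy.max())   # strong overshoot vs none
```

Both runs eventually settle on the true direction; only the path taken, and hence the filtering of rapid fluctuations, differs with the damping.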
The effect of the response of the system on the spectrum is expressed by the transfer function H(n):

So(n) = H(n) Si(n)    (2.8)

where the subscripts refer to the input and output spectra. Note that, by virtue of the relationship
in equation 2.5, the variance of the output depends on H(n). H(n) defines the effect of the sensor
as a filter, as discussed in the next section. The ways in which it can be calculated or measured are
discussed in 2.3.
2.2.4 Filters
This section discusses the properties of filters, with examples of the ways in which they can
affect the data.
Filtering is the processing of a time series (either continuous or discrete, namely, sampled)
in such a way that the value assigned at a given time is weighted by the values that occurred
Figure 2.3. The response of a first order system to a step function. At time TI the system has
reached 63% of its final value.
Figure 2.4. The response of a second order system to a step function. pN is the natural period,
related to k1 in equation 2.7, which, for a wind vane, depends on wind speed. The curves
shown are for damping factors with values 0.1 (very lightly damped), 0.7 (critically damped,
optimum for most purposes) and 2.0 (heavily damped). The damping factor is related to k2
in equation 2.7.
at other times. In most cases, these times will be adjacent to the given time. For example, in
a discrete time series of N samples numbered 0 to N – 1, with values yi, the value of the filtered
observation ȳi might be defined as:
ȳi = Σ_{j=−m}^{+m} wj y_{i+j}    (2.9)
Here there are 2m + 1 terms in the filter, numbered by the dummy variable j from –m to +m,
and ȳi is centred at j = 0. Some data are rejected at the beginning and end of the sampling time.
wj is commonly referred to as a weighting function and typically:
Σ_{j=−m}^{+m} wj = 1    (2.10)
so that at least the average value of the filtered series will have the same value as the original one.
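With equal weights, equations 2.9 and 2.10 describe a simple centred moving average. A minimal sketch (the data values are invented):

```python
import numpy as np

def boxcar(y, m):
    """Equation 2.9 with equal weights w_j = 1/(2m + 1), which satisfy
    equation 2.10 (they sum to one)."""
    w = np.full(2 * m + 1, 1.0 / (2 * m + 1))
    # mode="valid" rejects the m points at each end of the series,
    # where the window is incomplete, as noted in the text
    return np.convolve(y, w, mode="valid")

y = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(boxcar(y, m=1))   # [1. 2. 3. 4. 5.]
```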
The above example uses digital filtering. Similar effects can be obtained using electronics
(for example, through a resistor and capacitor circuit) or through the characteristics of the sensor
(for example, as in the case of the anemometer, discussed earlier). Whether digital or analogue, a
filter is characterized by H(n). If digital, H(n) can be calculated; if analogue, it can be obtained by
the methods described in 2.3.
For example, compare a first order system with a response time of TI, and a “box car” filter of
length Ts on a discrete time‑series taken from a sensor with much faster response. The forms of
these two filters are shown in Figure 2.5. In the first, it is as though the instrument has a memory
which is strongest at the present instant, but falls off exponentially the further in the past the
data goes. The box car filter has all weights of equal magnitude for the period Ts, and zero
beyond that. The frequency response functions, H(n), for these two are shown in Figure 2.6.
In the figure, the frequencies have been scaled to show the similarity of the two response
functions. It shows that an instrument with a response time of, say, 1 s has approximately the
Figure 2.5. The weighting factors for a first order (exponential) weighting function and a box
car weighting function. For the box car Ta is Ts, the sampling time, and w = 1/N. For the first
order function Ta is TI, the time constant of the filter, and w(t) = (1/TI) exp (–t/TI).
Figure 2.6. Frequency response functions for a first order (exponential) weighting function
and a box car weighting function. The frequency is normalized for the first order filter by TI,
the time constant, and for the box car filter by Ts, the sampling time.
same effect on an input as a box car filter applied over 4 s. However, it should be noted that a
box car filter, which is computed numerically, does not behave simply. It does not remove all the
higher frequencies beyond the Nyquist frequency, and can only be used validly if the spectrum
falls off rapidly above ny. Note that the box car filter shown in Figure 2.6 is an analytical solution
for w as a continuous function; if the number of samples in the filter is small, the cut‑off is less
sharp and the unwanted higher frequency peaks are larger.
See Acheson (1968) for practical advice on box car and exponential filtering, and a comparison of
their effects.
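The two response functions in Figure 2.6 have standard analytic forms: H(n) = 1/√(1 + (2πnTI)²) for the first order (exponential) filter and H(n) = |sin(πnTs)/(πnTs)| for the box car. A sketch (TI and Ts invented, with Ts = 4TI as in the figure) showing both the broad similarity of the two at low frequencies and the secondary maxima of the box car response:

```python
import numpy as np

TI = 1.0             # first order time constant (s), illustrative
Ts = 4.0 * TI        # box car averaging time, as in Figure 2.6
n = np.linspace(0.01, 2.0, 400)   # frequency (Hz)

# Amplitude responses of the two filters
H_exp = 1.0 / np.sqrt(1.0 + (2.0 * np.pi * n * TI) ** 2)
H_box = np.abs(np.sinc(n * Ts))   # np.sinc(x) = sin(pi*x)/(pi*x)

# Both pass the lowest frequencies almost unattenuated...
print(H_exp[0], H_box[0])         # both ~1
# ...but the box car response has zeros (at multiples of 1/Ts) and
# small secondary maxima beyond them
peak = H_box[(n > 1.0 / Ts) & (n < 2.0 / Ts)].max()
print(peak)                       # ~0.2
```

The secondary maxima are the “unwanted higher frequency peaks” mentioned above: a short box car filter does not remove all energy beyond its first zero.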
A response function of a second order system is given in Figure 2.7, for a wind vane in this case,
showing how damping acts as a band‑pass filter.
It can be seen that the processing of signals by systems can have profound effects on the data
output and must be expertly done.
Among the effects of filters is the way in which they can change the statistical information of the
data. One of these was touched on earlier and illustrated in equations 2.5 and 2.8. Equation 2.5
shows how the integral of the spectrum over all frequencies gives the variance of the time series,
while equation 2.8 shows how filtering, by virtue of the effect of the transfer function, will
change the measured spectrum. Note that the variance is not always decreased by filtering. For
example, in certain cases, for a second order system the transfer function will amplify parts of the
spectrum and possibly increase the variance, as shown in Figure 2.7.
To give a further example, if the distribution is Gaussian, the variance is a useful parameter. If
it were decreased by filtering, a user of the data would underestimate the departure from the
mean of events occurring with given probabilities or return periods.
Also, the design of the digital filter can have unwanted or unexpected effects. If Figure 2.6 is
examined it can be seen that the response function for the box car filter has a series of maxima at
Figure 2.7. Frequency response functions for a second order system, such as a wind vane.
The frequency is normalized by nN, the natural frequency, which depends on wind speed.
The curves shown are for damping factors with values 0.1 (very lightly damped), 0.7 (critically
damped, optimum for most purposes) and 2.0 (heavily damped).
frequencies above where it first becomes zero. This will give the filtered data a small periodicity
at these frequencies. In this case, the effect will be minimal as the maxima are small. However, for
some filter designs quite significant maxima can be introduced. As a rule of thumb, the smaller
the number of weights, the greater the problem. In some instances, periodicities have been
claimed in data that only existed because the data had been filtered.
An issue related to the concept of filters is the length of the sample. This can be illustrated by
noting that, if the length of record is of duration T, contributions to the variability of the data at
frequencies below 1/T will not be possible. It can be shown that a finite record length has the
effect of a high‑pass filter. As for the low‑pass filters discussed above, a high‑pass filter will also
have an impact on the statistics of the output data.
The filtering characteristics of a sensor or an electronic circuit, or the system that they comprise,
must be known to determine the appropriate sampling frequency for the time series that
the system produces. The procedure is to measure the transfer or response function H(n) in
equation 2.8.
The transfer function can be obtained in at least three ways – by direct measurement, calculation
and estimation.
Response can be directly measured using at least two methods. In the first method a known
change, such as a step function, is applied to the sensor or filter and its response time measured;
H(n) can then be calculated. In the second method, the output of the sensor is compared to
another, much faster sensor. The first method is more commonly used than the second.
A simple example of how to determine the response of a sensor to a known input is to measure
the distance constant of a rotating‑cup or propeller anemometer. In this example, the known
input is a step function. The anemometer is placed in a constant velocity air‑stream, prevented
from rotating, then released, and its output recorded. The time taken by the output to increase
from zero to 63% of its final or equilibrium speed in the air‑stream is the time “constant”
(see 2.2.3).
If another sensor, which responds much more rapidly than the one whose response is to be
determined, is available, then good approximations of both the input and output can be
measured and compared. The easiest device to use to perform the comparison is probably a
modern, two‑channel digital spectrum analyser. The output of the fast‑response sensor is input
to one channel, the output of the sensor being tested to the other channel, and the transfer
function automatically displayed. The transfer function is a direct description of the sensor as
a filter. If the device whose response is to be determined is an electronic circuit, generating
a known or even truly random input is much easier than finding a much faster sensor. Again,
a modern, two‑channel digital spectrum analyser is probably most convenient, but other
electronic test instruments can be used.
This is the approach described in 2.2.3. If enough is known about the physics of a sensor/filter,
the response to a large variety of inputs may be determined by either analytic or numerical
solution. Both the response to specific inputs, such as a step function, and the transfer function
can be calculated. If the sensor or circuit is linear (described by a linear differential equation),
the transfer function is a complete description, in that it describes the amplitude and phase
responses as a function of frequency, in other words, as a filter. Considering response as a
function of frequency is not always convenient, but the transfer function has a Fourier transform
counterpart, the impulse response function, which makes interpretation of response as a
function of time much easier. This is illustrated in Figures 2.3 and 2.4 which represent response as
a function of time.
If obtainable, analytic solutions are preferable because they clearly show the dependence upon
the various parameters.
If the transfer functions of a transducer and each following circuit are known, their product is the
transfer function of the entire system. If, as is usually the case, the transfer functions are low‑pass
filters, the aggregate transfer function is a low‑pass filter whose cut‑off frequency is less than that
of any of the individual filters.
If one of the individual cut‑off frequencies is much less than any of the others, then the cut‑off
frequency of the aggregate is only slightly smaller.
Since the cut‑off frequency of a low‑pass filter is approximately the inverse of its time constant, it
follows that, if one of the individual time constants is much larger than any of the others, the time
constant of the aggregate is only slightly larger.
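These statements can be checked numerically for two cascaded first order low‑pass filters (the time constants are invented; the half‑power point is taken as the cut‑off frequency):

```python
import numpy as np

def H_first_order(n, T):
    """Amplitude response of a first order low-pass filter, time constant T."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * n * T) ** 2)

n = np.linspace(1e-4, 1.0, 200000)   # frequency grid (Hz)
T_slow, T_fast = 1.0, 0.05           # one time constant much larger than the other

# The aggregate transfer function is the product of the individual ones
H_total = H_first_order(n, T_slow) * H_first_order(n, T_fast)

def cutoff(H):
    """Frequency at which the response has fallen to 1/sqrt(2)."""
    return n[np.argmin(np.abs(H - 1.0 / np.sqrt(2.0)))]

fc_slow = cutoff(H_first_order(n, T_slow))
fc_total = cutoff(H_total)
print(fc_slow, fc_total)   # aggregate cut-off only slightly smaller
```

Because T_fast is much smaller than T_slow, the aggregate cut‑off frequency is lower than the slow stage’s alone, but only slightly, as stated in the text.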
2.4 SAMPLING
Figure 2.8 schematically illustrates a typical sensor and sampling circuit. When exposed to the
atmosphere, some property of the transducer changes with an atmospheric variable such as
temperature, pressure, wind speed or direction, or humidity and converts that variable into a
useful signal, usually electrical. Signal conditioning circuits commonly perform functions such as
converting transducer output to a voltage, amplifying, linearizing, offsetting and smoothing. The
low‑pass filter finalizes the sensor output for the sample‑and‑hold input. The sample‑and‑hold
and the analogue‑to‑digital converter produce the samples from which the observation is
computed in the processor.
It should be noted that the smoothing performed at the signal conditioning stage for
engineering reasons, to remove spikes and to stabilize the electronics, is performed by a low‑pass
filter; it reduces the response time of the sensor and removes high frequencies which may be
of interest. Its effect should be explicitly understood by the designer and user, and its cut‑off
frequency should be as high as practicable.
So‑called “smart sensors”, those with microprocessors, may incorporate all the functions shown.
The signal conditioning circuitry may not be found in all sensors, or may be combined with other
circuitry. In other cases, such as with a rotating‑cup or propeller anemometer, it may be easy
to speak only of a sensor because it is awkward to distinguish a transducer. In the few cases for
which a transducer or sensor output is a signal whose frequency varies with the atmospheric
variable being measured, the sample‑and‑hold and the analogue‑to‑digital converter may be
replaced by a counter. But these are not important details. The important element in the design
is to ensure that the sequence of samples adequately represents the significant changes in the
atmospheric variable being measured.
The first condition imposed upon the devices shown in Figure 2.8 is that the sensor must
respond quickly enough to follow the atmospheric fluctuations which are to be described in the
observation. If the observation is to be a 1, 2 or 10 min average, this is not a very demanding
requirement. On the other hand, if the observation is to be that of a feature of turbulence, such as
peak wind gust, care must be taken when selecting a sensor.
Figure 2.8. A typical sensor and sampling circuit: atmosphere → sensor/transducer → low‑pass filter → sample‑and‑hold (driven by a clock) → analogue‑to‑digital converter → processor → observation
The second condition imposed upon the devices shown in Figure 2.8 is that the sample‑and‑hold
and the analogue‑to‑digital converter must provide enough samples to make a good
observation. The accuracy demanded of meteorological observations usually challenges the
sensor, not the electronic sampling technology. However, the sensor and the sampling must be
matched to avoid aliasing. If the sampling rate is limited for technical reasons, the sensor/filter
system must be designed to remove the frequencies that cannot be represented.
If the sensor has a suitable response function, the low‑pass filter may be omitted, included
only as insurance, or may be included because it improves the quality of the signal input to the
sample‑and‑hold. As examples, such a filter may be included to eliminate noise pick‑up at the
end of a long cable or to further smooth the sensor output. Clearly, this circuit must also respond
quickly enough to follow the atmospheric fluctuations of interest.
For most meteorological and climatological applications, observations are required at intervals
of 30 min to 24 hours, and each observation is made with a sampling time of the order of
1 to 10 min. Volume I, Chapter 1, Annex 1.A gives a recent statement of requirements for
these purposes.
A common practice for routine observations is to take one spot reading of the sensor (such as a
thermometer) and rely on its time constant to provide an approximately correct sampling time.
This amounts to using an exponential filter (Figure 2.6). AWSs commonly use faster sensors, and
several spot readings must be taken and processed to obtain an average (box car filter) or other
appropriately weighted mean.
(a) Samples taken to compute averages should be obtained at equispaced time intervals which:
(i) Do not exceed the time constant of the sensor; or
(ii) Do not exceed the time constant of an analogue low‑pass filter following the
linearized output of a fast‑response sensor; or
(iii) Are sufficient in number to ensure that the uncertainty of the average of the samples
is reduced to an acceptable level, for example, smaller than the required accuracy of
the average;
(b) Samples to be used in estimating extremes of fluctuations, such as wind gusts, should be
taken at rates at least four times as often as specified in (i) or (ii) above.
For obtaining averages, somewhat faster sampling rates than (i) and (ii), such as twice per time
constant, are often advocated and practised.
Criteria (i) and (ii) derive from consideration of the Nyquist frequency. If the sample spacing
Δt ≤ TI, the sampling frequency n ≥ 1/TI and nTI ≥ 1. It can be seen from the exponential curve in
Figure 2.6 that this removes the higher frequencies and prevents aliasing. If Δt = TI, ny = 1/(2TI) and
the data will be aliased only by the spectral energy at frequencies at nTI = 2 and beyond, that is
where the fluctuations have periods of less than 0.5TI.
Criteria (i) and (ii) are used for automatic sampling. The statistical criterion in (iii) is more
applicable to the much lower sampling rates in manual observations. The uncertainty of the
mean is inversely proportional to the square root of the number of observations, and its value can
be determined from the statistics of the quantity.
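The 1/√N behaviour of the uncertainty of the mean can be illustrated with synthetic data (all numbers are invented for the example): quadrupling the number of samples halves the scatter of the computed averages.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0          # standard deviation of the fluctuating quantity

# Scatter of the mean for samples of 10 and of 40 values
means_10 = rng.normal(0.0, sigma, size=(20000, 10)).mean(axis=1)
means_40 = rng.normal(0.0, sigma, size=(20000, 40)).mean(axis=1)

ratio = means_10.std() / means_40.std()
print(ratio)   # ~2, since sqrt(40/10) = 2
```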
¹ As adopted by CIMO at its tenth session (1989) through Recommendation 3 (CIMO‑X).
40 GUIDE TO INSTRUMENTS AND METHODS OF OBSERVATION - VOLUME V
Criterion (b) emphasizes the need for high sampling frequencies, or more precisely, small
time‑constants, to measure gusts. Recorded gusts are smoothed by the instrument response, and
the recorded maximum will be averaged over several times the time constant.
The effect of aliasing on estimates of the mean can be seen very simply by considering what
happens when the frequency of the wave being measured is the same as the sampling
frequency, or a multiple thereof. The derived mean will depend on the timing of the sampling. A
sample obtained once per day at a fixed time will not provide a good estimate of mean monthly
temperature.
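This worst case can be reproduced in a few lines. The sinusoid stands in for any fluctuation whose frequency coincides with the sampling frequency, such as the diurnal temperature cycle sampled once per day (the function name and counts are invented for the illustration):

```python
import math

def once_per_cycle_mean(phase: float, n_cycles: int = 30) -> float:
    """Mean of a pure periodic signal sin(2*pi*t) sampled exactly once
    per cycle, always at the same phase (fraction of a cycle)."""
    samples = [math.sin(2 * math.pi * (k + phase)) for k in range(n_cycles)]
    return sum(samples) / len(samples)

# The true mean over any whole number of cycles is 0, yet the estimate
# simply reproduces the signal value at the chosen sampling phase.
crest = once_per_cycle_mean(0.25)    # samples only the crest: close to +1
trough = once_per_cycle_mean(0.75)   # samples only the trough: close to -1
```

However many cycles are accumulated, the estimate never converges to the true mean: the error is fixed by the sampling phase, not reduced by averaging.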
For a slightly more complex illustration of aliasing, consider a time series of three‑hourly
observations of temperature using an ordinary thermometer. If temperature changes smoothly
with time, as it usually does, the daily average computed from eight samples is acceptably stable.
However, if a mesoscale event (a thunderstorm) has occurred which reduced the temperature
by many degrees for 30 min, the computed average is wrong. The reliability of daily averages
depends on the usual weakness of the spectrum in the mesoscale and higher frequencies.
However, the occurrence of a higher‑frequency event (the thunderstorm) aliases the data,
affecting the computation of the mean, the standard deviation and other measures of dispersion,
and the spectrum.
The matter of sampling rate may also be discussed in terms of Figure 2.8. The argument in 2.2.1
was that, for the measurement of spectra, the sampling rate, which determines the Nyquist
frequency, should be chosen so that the spectrum of fluctuations above the Nyquist frequency is
too weak to affect the computed spectrum. This is achieved if the sampling rate set by the clock
in Figure 2.8 is at least twice the highest frequency of significant amplitude in the input signal to
the sample‑and‑hold.
The wording “highest frequency of significant amplitude” used above is vague. It is difficult to
find a rigorous definition because signals are never truly bandwidth limited. However, it is not
difficult to ensure that the amplitude of signal fluctuations decreases rapidly with increasing
frequency, and that the root‑mean‑square amplitude of fluctuations above a given frequency is
either small in comparison with the quantization noise of the analogue‑to‑digital converter, small
in comparison with an acceptable error or noise level in the samples, or contributes negligibly to
total error or noise in the observation.
Section 2.3 discussed the characteristics of sensors and circuits which can be chosen or adjusted
to ensure that the amplitude of signal fluctuations decreases rapidly with increasing frequency.
Most transducers, by virtue of their inability to respond to rapid (high‑frequency) atmospheric
fluctuations and their ability to replicate faithfully slow (low‑frequency) changes, are also
low‑pass filters. By definition, low‑pass filters limit the bandwidth and, by Nyquist’s theorem,
also limit the sampling rate that is necessary to reproduce the filter output accurately. For
example, if there are real variations in the atmosphere with periods down to 100 ms, the Nyquist
sampling frequency would be 1 per 50 ms, which is technically demanding. However, if they
are seen through a sensor and filter which respond much more slowly, for example with a 10 s
time constant, the Nyquist sampling rate would be 1 sample per 5 s, which is much easier and
cheaper, and preferable if measurements of the high frequencies are not required.
Many data quality control techniques of use in AWSs depend upon the temporal consistency,
or persistence, of the data for their effectiveness. As a very simple example, consider two
hypothetical quality‑control algorithms for pressure measurements at AWSs. Samples
are taken every 10 s, and 1 min averages computed each minute. It is assumed that atmospheric
pressure only rarely, if ever, changes at a rate exceeding 1 hPa per minute.
The first algorithm rejects the average if it differs from the previous one by more than 1 hPa. This
would not make good use of the available data. It allows a single sample with as much as a 6 hPa
error to pass undetected and to introduce a 1 hPa error in an observation.
CHAPTER 2. SAMPLING METEOROLOGICAL VARIABLES 41
The second algorithm rejects a sample if it differs from the previous one by more than 1 hPa. In
this case, an average contains no error larger than about 0.16 (1/6) hPa. In fact, if the assumption
is correct that atmospheric pressure only rarely changes at a rate exceeding 1 hPa per minute, the
accept/reject criteria on adjacent samples could be tightened to 0.16 hPa and error in the average
could be reduced even more.
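The two algorithms can be sketched as follows (a minimal illustration; the function names and sample values are invented for the example):

```python
def accept_average(new_avg: float, previous_avg: float, max_step: float = 1.0) -> bool:
    """Algorithm 1: accept or reject a whole 1 min average by comparing
    it with the previously accepted average (hPa)."""
    return abs(new_avg - previous_avg) <= max_step

def average_with_sample_qc(samples, max_step: float = 1.0) -> float:
    """Algorithm 2: reject any 10 s sample differing from the previously
    accepted sample by more than max_step (hPa), then average the rest."""
    accepted = [samples[0]]
    for s in samples[1:]:
        if abs(s - accepted[-1]) <= max_step:
            accepted.append(s)
    return sum(accepted) / len(accepted)

# One minute of 10 s pressure samples (hPa), one with a spurious spike
# of 5.7 hPa:
minute = [1013.2, 1013.2, 1018.9, 1013.3, 1013.2, 1013.3]

naive = sum(minute) / len(minute)         # spike passes: about 0.94 hPa error
passed = accept_average(naive, 1013.2)    # algorithm 1 still accepts it
checked = average_with_sample_qc(minute)  # algorithm 2 rejects the spike
```

With the spike rejected at the sample level, the computed average (1013.24 hPa) is essentially unaffected, whereas algorithm 1 accepts the contaminated average without complaint.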
The point of the example is that data quality control procedures that depend upon temporal
consistency (correlation) for their effectiveness are best applied to data of high temporal
resolution (sampling rate). At the high frequency end of the spectrum in the sensor/filter output,
correlation between adjacent samples increases with increasing sampling rate until the Nyquist
frequency is reached, after which no further increase in correlation occurs.
Up to this point in the discussion, nothing has been said which would discourage using a
sensor/filter whose time constant is as long as the averaging period required for the observation,
with a single sample at the end of the period taken as the observation. Although this would be
minimal in its demands upon the digital subsystem, there is another consideration needed for
effective data quality control. Observations can be grouped into three categories as follows:
(a) Accurate (observations with errors less than or equal to a specified value);
(b) Inaccurate (observations with errors exceeding a specified value);
(c) Missing.
There are two reasons for data quality control, namely, to minimize the number of inaccurate
observations and to minimize the number of missing observations. Both purposes are served
by ensuring that each observation is computed from a reasonably large number of data
quality‑controlled samples. In this way, samples with large spurious errors can be isolated and
excluded, and the computation can still proceed, uncontaminated by that sample.
REFERENCES AND FURTHER READING
3.1 GENERAL
This chapter discusses in general terms the procedures for processing and/or converting data
obtained directly from instruments into data suitable for meteorological users, in particular for
exchange between countries. Formal regulations for the reduction of data to be exchanged
internationally have been prescribed by WMO, and are laid down in WMO (2015). Volume I,
Chapter 1, contains some relevant advice and definitions.
3.1.1 Definitions
Level I data, in general, are instrument readings expressed in appropriate physical units, and
referred to with geographical coordinates. They require conversion to the normal meteorological
variables (identified in Volume I, Chapter 1). Level I data themselves are in many cases obtained
from the processing of electrical signals such as voltages, referred to as raw data. Examples of
these data are satellite radiances and water‑vapour pressure.
The data recognized as meteorological variables are Level II data. They may be obtained directly
from instruments (as is the case for many kinds of simple instruments) or derived from Level I
data. For example, a sensor cannot measure visibility, which is a Level II quantity; instead, sensors
measure the extinction coefficient, which is a Level I quantity.
Level III data are those contained in internally consistent datasets, generally in grid‑point form.
They are not within the scope of the present Guide.
Observing stations throughout the world routinely produce frequent observations in standard
formats for exchanging high‑quality information obtained by uniform observing techniques,
despite the different types of sensors in use throughout the world, or even within nations.
To accomplish this, very considerable resources have been devoted over very many years to
standardize content, quality and format. As automated observation of the atmosphere becomes
more prevalent, it becomes even more important to preserve this standardization and develop
additional standards for the conversion of raw data into Level I data, and raw and Level I data
into Level II data.
The role of a transducer is to sense an atmospheric variable and convert it quantitatively into a
useful signal. However, transducers may have secondary responses to the environment, such as
temperature‑dependent calibrations, and their outputs are subject to a variety of errors, such as
drift and noise. After proper sampling by a data‑acquisition system, the output signal must be
scaled and linearized according to the total system calibration and then filtered or averaged. At
this stage, or earlier, it becomes raw data. The data must then be converted to measurements of
the physical quantities to which the sensor responds, which are Level I data or may be Level II
data if no further conversion is necessary. For some applications, additional variables must be
derived. At various stages in the process the data may be corrected for extraneous effects, such
as exposure, and may be subjected to quality control.
Data from conventional weather stations and AWSs must, therefore, be subjected to many
operations before they can be used. The whole process is known as data reduction and consists
of the execution of a number of functions, comprising some or all of the following:
The order in which these functions are executed is only approximately sequential. Of course, the
first and the last function listed above should always be performed first and last. Linearization
may immediately follow or be inherent in the transducer, but it must precede the extraction of an
average. Specific quality control and the application of corrections could take place at different
levels of the data‑reduction process. Depending on the application, stations can operate in a
diminished capacity without incorporating all of these functions.
In the context of the present Guide, the important functions in the data‑reduction process are
the selection of appropriate sampling procedures, the application of calibration information,
linearization when required, filtering and/or averaging, the derivation of related variables,
the application of corrections, quality control, and the compilation of metadata. These are the
topics addressed in this chapter. More explicit information on quality management is given
in the present volume, Chapter 1, and on sampling, filtering and averaging in the present
volume, Chapter 2.
Once reduced, the data must be made available through coding, transmission and receipt,
display, and archiving, which are the topics of other WMO Manuals and Guides. An observing
system is not complete unless it is connected to other systems that deliver the data to the
users. The quality of the data is determined by the weakest link. At every stage, quality control
must be applied.
Much of the existing technology and standardized manual techniques for data reduction
can also be used by AWSs, which, however, make particular demands. AWSs include various
sensors, standard computations for deriving elements of messages, and the message format
CHAPTER 3. DATA REDUCTION 45
itself. Not all sensors interface easily with automated equipment. Analytic expressions for
computations embodied in tables must be recovered or discovered. The rules for encoding
messages must be expressed in computer languages with degrees of precision, completeness
and unambiguousness not demanded by natural language instructions prepared for human
observers. Furthermore, some human functions, such as the identification of cloud types, cannot
be automated using either current or foreseeable technologies.
Data acquisition and data‑processing software for AWSs are discussed at some length in
Volume III, Chapter 1, to an extent which is sufficiently general for any application of electrical
transducers in meteorology. Some general considerations and specific examples of the design of
algorithms for synoptic AWSs are given in WMO (1987).
In processing meteorological data there is usually one correct procedure, algorithm or approach,
and there may be many approximations ranging in validity from good to useless. Experience
strongly suggests that the correct approach is usually the most efficient in the long term. It is
direct, requires a minimum of qualifications, and, once implemented, needs no further attention.
Accordingly, the subsequent paragraphs are largely limited to the single correct approach, as far
as exact solutions exist, to the problem under consideration.
3.2 SAMPLING
See the present volume, Chapter 2 for a full discussion of sampling. The following is a summary of
the main outcomes.
It should be recognized that atmospheric variables fluctuate rapidly and randomly because
of ever‑present turbulence, and that transducer outputs are not faithful reproductions of
atmospheric variables because of their imperfect dynamic characteristics, such as limited ability
to respond to rapid changes. Transducers generally need equipment to amplify or protect their
outputs and/or to convert one form of output to another, such as resistance to voltage. The
circuitry used to accomplish this may also smooth or low‑pass filter the signal. There is a cut‑off
frequency above which no significant fluctuations occur because none exist in the atmosphere
and/or the transducer or signal conditioning circuitry has removed them.
An important design consideration is how often the transducer output should be sampled. The
definitive answer is: at an equispaced rate at least twice the cut‑off frequency of the transducer
output signal. However, a simpler and equivalent rule usually suffices: the sampling interval
should not exceed the largest of the time constants of all the devices and circuitry preceding
the acquisition system. If the sampling rate is less than twice the cut‑off frequency, unnecessary
errors occur in the variance of the data and in all derived quantities and statistics. While these
increases may be acceptable in particular cases, in others they are not. Proper sampling always
ensures minimum variance.
Good design may call for incorporating a low‑pass filter, with a time constant about equal to the
sampling interval of the data‑acquisition system. Such a filter also serves as a precaution against
noise, especially 50 or 60 Hz pick‑up from power mains by cables connecting sensors to
processors, and leakage through power supplies.
3.3 APPLICATION OF CALIBRATION FUNCTIONS

The WMO regulations (WMO, 2015) prescribe that stations be equipped with properly
calibrated instruments and that adequate observational and measuring techniques be followed
to ensure that the measurements are accurate enough to meet the needs of the relevant
meteorological disciplines. The conversion of raw data from instruments into the corresponding
meteorological variables is achieved by means of calibration functions. The proper application
of calibration functions and any other systematic corrections are most critical for obtaining data
that meet expressed accuracy requirements.
A description of the calibration procedures and systematic corrections associated with each of
the basic meteorological variables is contained in each of the respective chapters in Volume I.
3.4 LINEARIZATION
If the transducer output is not exactly proportional to the quantity being measured, the signal
must be linearized, making use of the instrument’s calibration. This must be carried out before
the signal is filtered or averaged. The sequence of operations “average then linearize” produces
different results from the sequence “linearize then average” when the signal is not constant
throughout the averaging period.
(a) Many transducers are inherently nonlinear, namely, their output is not proportional to the
measured atmospheric variable. A thermistor is a simple example;
(b) Although a sensor may incorporate linear transducers, the variables measured may not
be linearly related to the atmospheric variable of interest. For example, the photodetector
and shaft‑angle transducer of a rotating beam ceilometer are linear devices, but the
ceilometer output signal (backscattered light intensity as a function of angle) is non‑linear
in cloud height;
(c) The conversion from Level I to Level II may not be linear. For example, extinction coefficient,
not visibility or transmittance, is the proper variable to average in order to produce
estimates of average visibility.
In the first of these cases, a polynomial calibration function is often used. If so, it is highly
desirable to have standardized sensors with uniform calibration coefficients to avoid the
problems that arise when interchanging sensors in the field. In the other two cases, an analytic
function which describes the behaviour of the transducer is usually appropriate.
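The difference between the two orders of operations can be demonstrated with a simple beta‑model thermistor (the constants below are typical illustrative values, not taken from this Guide):

```python
import math

# Beta-model thermistor: R(T) = R0 * exp(B * (1/T - 1/T0))
# Illustrative constants, not a real part specification:
R0, T0, B = 10_000.0, 298.15, 3435.0   # ohms, kelvin, kelvin

def resistance(temp_k: float) -> float:
    """Transducer response: resistance as a nonlinear function of temperature."""
    return R0 * math.exp(B * (1.0 / temp_k - 1.0 / T0))

def temperature(r_ohm: float) -> float:
    """Linearization: invert the calibration to recover temperature."""
    return 1.0 / (math.log(r_ohm / R0) / B + 1.0 / T0)

# Temperature swinging between 283.15 K and 303.15 K during the period:
temps = [283.15, 303.15] * 5
readings = [resistance(t) for t in temps]

linearize_then_average = sum(temperature(r) for r in readings) / len(readings)
average_then_linearize = temperature(sum(readings) / len(readings))
# The first recovers the true mean (293.15 K); the second is biased
# by more than 1 K because the calibration curve is nonlinear.
```

Averaging the raw resistances weights the cold readings (high resistance) too heavily, which is why linearization must precede averaging.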
3.5 AVERAGING
The natural small‑scale variability of the atmosphere makes smoothing or averaging necessary
for obtaining representative observations and compatibility of data from different instruments.
For international exchange and for many operational applications, the reported measurement
must be representative of the previous 2 or 10 min for wind, and, by convention, of 1 to 10 min
for other quantities. The 1 min practice arises in part from the fact that some conventional
meteorological sensors have a response of the order of 1 min and a single reading is notionally
a 1 min average or smoothed value. If the response time of the instrument is much faster, it is
necessary to take samples and filter or average them. This is the topic of the present volume,
Chapter 2. See Volume I, Chapter 1, Annex 1.A, for the requirements for the averaging times
typical of operational meteorological instrument systems.
Two types of averaging or smoothing are commonly used, namely, arithmetic and exponential.
The arithmetic average conforms with the normal meaning of average and is readily
implemented digitally; this is the box car filter described in the present volume, Chapter 2.
An exponential average is the output of the simplest low‑pass filter representing the simplest
response of a sensor to atmospheric fluctuations, and it is more convenient to implement in
analogue circuitry than the arithmetic average. When the time constant of a simple filter is
approximately half the period over which an average is being calculated, the arithmetic
and exponentially smoothed values are practically indistinguishable (see the present volume,
Chapter 2; see also Acheson, 1968).
The outputs of fast‑response sensors vary rapidly, necessitating high sampling rates for
optimal (minimum uncertainty) averaging. To reduce the required sampling rate while still
providing the optimal digital average, it is possible to linearize the transducer output (where
necessary), exponentially smooth it using analogue circuitry with time constant tc, and then
sample digitally at intervals of tc.
Many other types of elaborate filters, computed digitally, have been used for special applications.
Because averaging non‑linear variables creates difficulties when the variables change during
the averaging period, it is important to choose the appropriate linear variable to compute the
average. The table in 3.6 lists some specific examples of elements of a synoptic observation
which are reported as averages, with the corresponding linear variable that should be used.
3.6 RELATED VARIABLES
Besides averaged data, extremes and other variables that are representative for specific periods
must be determined, depending on the purpose of the observation. An example of this is wind
gust measurements, for which higher sampling rates are necessary.
Also, other quantities have to be derived from the averaged data, such as mean sea‑level
pressure, visibility and dew point. At conventional manual stations, conversion tables are used. It
is common practice to incorporate the tables into an AWS and to provide interpolation routines,
or to incorporate the basic formulae or approximations of them. See the various chapters of
Volume I for the data conversion practices, and Volume III, Chapter 1 for AWS practice.
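As an illustration of incorporating a basic formula in an AWS, the following sketch computes dew point from air temperature and relative humidity with a Magnus‑type approximation (the coefficients are one common choice, valid for ordinary surface conditions, and are not taken from this Guide):

```python
import math

# Magnus-type approximation coefficients (illustrative common choice):
BETA = 17.62          # dimensionless
LAMBDA = 243.12       # degrees Celsius

def dew_point(t_air_c: float, rh_percent: float) -> float:
    """Dew point (deg C) from air temperature (deg C) and relative humidity (%)."""
    gamma = math.log(rh_percent / 100.0) + BETA * t_air_c / (LAMBDA + t_air_c)
    return LAMBDA * gamma / (BETA - gamma)

td = dew_point(20.0, 50.0)   # roughly 9.3 deg C
```

By construction, the formula returns the air temperature itself when the relative humidity is 100%, a useful self‑consistency check for any implementation.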
Quantities for which data conversion is necessary when averages are being computed
3.7 CORRECTIONS
The measurements of many meteorological quantities have corrections applied to them either
as raw data or at the Level I or Level II stage to correct for various effects. These corrections are
described in the chapters on the various meteorological variables in Volume I. Corrections to
raw data, for zero or index error, or for temperature, gravity and the like are derived from the
calibration and characterization of the instrument. Other types of corrections or adjustments
to the raw or higher level data include smoothing, such as that applied to cloud height
measurements and upper‑air profiles, and corrections for exposure such as those sometimes
applied to temperature, wind and precipitation observations. The algorithms for these types of
corrections may, in some cases, be based on studies that are not entirely definitive; therefore,
while they no doubt improve the accuracy of the data, the possibility remains that different
algorithms may be derived in the future. In such a case, it may become necessary to recover the
original uncorrected data. It is, therefore, advisable for the algorithms to be well documented.
3.8 QUALITY MANAGEMENT
Quality management is discussed in the present volume, Chapter 1. Formal requirements are
specified by WMO (2015) and general procedures are discussed in WMO (2010).
Quality‑control procedures should be performed at each stage of the conversion of raw sensor
output into meteorological variables. This includes the processes involved in obtaining the data,
as well as reducing them to Level II data.
During the process of obtaining data, the quality control should seek to eliminate both
systematic and random measurement errors, errors due to departure from technical standards,
errors due to unsatisfactory exposure of instruments, and subjective errors on the part of
the observer.
Quality control during the reduction and conversion of data should seek to eliminate errors
resulting from the conversion techniques used or the computational procedures involved.
In order to improve the quality of data obtained at high sampling rates, which may generate
increased noise, filtering and smoothing techniques are employed. These are described earlier in
this chapter, as well as in the present volume, Chapter 2.
3.9 COMPILING METADATA
Metadata are discussed in Volume I, Chapter 1, in the present volume, Chapter 1, and in other
chapters concerning the various meteorological quantities. Metadata must be kept so that:
(a) Original data can be recovered to be re‑worked, if necessary (with different filtering or
corrections, for instance);
(b) The user can readily discover the quality of the data and the circumstances under which
they were obtained (such as exposure).
The procedures used in all the data‑reduction functions described above must therefore
be recorded, generically for each type of data, and individually for each station and
observation type.
REFERENCES AND FURTHER READING
4.1 GENERAL
One of the purposes of WMO, set forth in Article 2 (c) of the WMO Convention, is “to promote
standardization of meteorological and related observations and to ensure the uniform
publication of observations and statistics” (WMO, 2015). For this purpose, sets of standard
procedures and recommended practices have been developed, and their essence is contained in
the present Guide.
Valid observational data can be obtained only when a comprehensive quality assurance
programme is applied to the instruments and the network. Calibration and testing are
inherent elements of a quality assurance programme. Other elements include clear
definition of requirements, instrument selection deliberately based on the requirements,
siting criteria, maintenance and logistics. These other elements must be considered when
developing calibration and test plans. On an international scale, the extension of quality
assurance programmes to include intercomparisons is important for the establishment of
compatible datasets.
National and international standards and guidelines exist for many aspects of testing and
evaluation, and should be used where appropriate. Some of them are referred to in this chapter.
4.1.1 Definitions
Definitions of terms in metrology are given in International Vocabulary of Metrology – Basic and
General Concepts and Associated Terms (VIM) by the Joint Committee for Guides in Metrology
(JCGM, 2012). Many of them are reproduced in Volume I, Chapter 1, and some are repeated
here for convenience. JCGM definitions are strongly recommended for use in meteorology,
although in meteorological practice some commonly used terminology might differ from
them. The JCGM document is a joint production with the International Bureau of Weights and
Measures, IEC, the International Federation of Clinical Chemistry and Laboratory Medicine, the
International Laboratory Accreditation Cooperation, ISO, the International Union of Pure and
Applied Chemistry, the International Union of Pure and Applied Physics and the International
Organization of Legal Metrology.
The VIM terminology differs from common usage in the following respects in particular:
¹ See Volume I, Chapter 1, Annex 1.C. For the most recent information on RICs, their terms of reference, locations and
capabilities, see https://community.wmo.int/en/activity-areas/imop/Regional_Instrument_Centres.
² See Volume III, Chapter 4, Annex 4.A.
CHAPTER 4. TESTING, CALIBRATION AND INTERCOMPARISON 51
The error of a measurement. The measured quantity value minus a reference quantity value
(the deviation has the other sign). It is composed of the random and systematic errors (the
term bias is commonly used for systematic error).
Before using atmospheric measurements taken with a particular instrument for meteorological
purposes, answers to a number of questions are needed, as follows:
(a) What is the accuracy of the instrument or measuring system?
(b) What is the variability of measurements in a network containing such measuring systems or
instruments?
(c) What change, or bias, will there be in the data provided by the instrument or measuring
system if its siting location is changed?
(d) What change or bias will there be in the data if it replaces a different instrument or
measuring system measuring the same weather element(s)?
To answer these questions and to assure the validity and relevance of the measurements
produced by a meteorological instrument or measuring system, some combination of
calibration, laboratory testing and functional testing is needed.
Calibration and test programmes should be developed and standardized, based on the expected
climatic variability and the environmental and electromagnetic interference under which
instruments and measuring systems are expected to operate. For example, factors considered
might include
the expected range of temperature, humidity and wind speed; whether or not an instrument
or measuring system must operate in a marine environment, or in areas with blowing dust
or sand; the expected variation in electrical voltage and phase, and signal and power line
electrical transients; and the expected average and maximum electromagnetic interference.
Meteorological Services may purchase calibration and test services from private laboratories and
companies, or set up test organizations to provide those services.
It is most important that at least two like instruments or measuring systems be subjected to each
test in any test programme. This allows for the determination of the expected variability in the
instruments or measuring systems, and also facilitates the detection of problems.
4.2 TESTING
Instruments and measuring systems are tested to develop information on their performance
under specified conditions of use. Manufacturers typically test their instruments and measuring
systems and in some cases publish operational specifications based on their test results.
However, it is extremely important for the user Meteorological Service to develop and carry out
its own test programme or to have access to an independent testing authority.
In general, a test programme is designed to ensure that an instrument or measuring system will
meet its specified performance, maintenance and mean‑time‑between‑failure requirements
under all expected operating, storage and transportation conditions. Test programmes are also
designed to develop information on the variability that can be expected in a network of like
instruments, in functional reproducibility, and in the comparability of measurements between
different instruments or systems.
Users should also have a programme for testing randomly selected production instruments and
measuring systems, even if pre‑production units have been tested, because even seemingly
minor changes in material, configurations or manufacturing processes may affect the operating
characteristics of instruments and measuring systems.
The International Organization for Standardization has standards (ISO, 1999, 2013) which specify
sampling plans and procedures for the inspection of lots of items.
4.2.2.1 Definitions
The following definitions serve to introduce the qualities of an instrument or measuring system
that should be the subject of operational testing:
The International Electrotechnical Commission also has standards (IEC, 2002) to classify
environmental conditions which are more elaborate than the above. They define ranges of
meteorological, physical and biological environments that may be encountered by products
being transported, stored, installed and used, which are useful for equipment specification and
for planning tests.
Environmental tests in the laboratory enable rapid testing over a wide range of conditions, and
can accelerate certain effects such as those of a marine environment with high atmospheric
salt loading. The advantage of environmental tests over field tests is that many tests can be
accelerated in a well‑equipped laboratory, and equipment may be tested over a wide range of
conditions specific to climatic regions. Environmental testing in the laboratory is important; it
can give insight into potential problems and generate confidence to go ahead with field tests,
but it cannot replace field testing.
For example, the United States of America prepared its National Weather Service (NWS) standard
environmental criteria and test procedures (NWS, 1984), based on a study which surveyed and
reported the expected operational and extreme ranges of the various weather elements in the
United States operational area, and presented proposed test criteria (NWS, 1980). These criteria
and procedures consist of three parts:
(a) Environmental test criteria and test limits for outdoor, indoor, and transportation/storage
environments;
(b) Test procedures for evaluating equipment against the environmental test criteria;
The prevalence of instruments and automated data collection and processing systems
that contain electronic components makes it necessary, in many cases, to include in an overall
test programme tests of performance in operational electrical environments and under
electromagnetic interference.
Such a test standard should be based on a study that quantifies the expected power line and
signal line transient levels and rise times caused by natural phenomena, such as thunderstorms.
It should also include testing for expected power variations, both voltage and phase. If the
equipment is expected to operate in an airport environment, or other environment with
possible electromagnetic radiation interference, this should also be quantified and included
in the standard. The programme may also serve to ensure that the equipment itself is not a generator of electromagnetic radiation. Particular attention should be paid to equipment containing a microprocessor, and therefore a crystal clock, which is critical for timing functions.
Calibration and environmental testing provide a necessary but not sufficient basis for defining
the operational characteristics of an instrument or measuring system, because calibration and
laboratory testing cannot completely define how the instrument or measuring system will
operate in the field. It is impossible to simulate the synergistic effects of all the changing weather
elements on an instrument in all of its required operating environments.
Functional testing is simply testing in the outdoor and natural environment where instruments
are expected to operate over a wide variety of meteorological conditions and climatic regimes,
and, in the case of surface instruments, over ground surfaces of widely varying albedo.
Functional testing is required to determine the adequacy of an instrument or measuring system
while it is exposed to wide variations in wind, precipitation, temperature, humidity, and direct,
diffuse and reflected solar radiation. Functional testing becomes more important as electronic
instruments, such as those using electro‑optic, piezoelectric and capacitive elements, are placed
into operational use. The readings from these instruments may be affected by adventitious
conditions such as insects, spiders and their webs, and the size distribution of particles in the
atmosphere, all of which must be determined by functional tests.
For many applications, comparability must be tested in the field. This is done with side‑by‑side
testing of like and different instruments or measuring systems against a field reference standard.
These concepts are presented in Hoehne (1971, 1972, 1977).
CHAPTER 4. TESTING, CALIBRATION AND INTERCOMPARISON 55
Functional testing may be planned and carried out by the laboratory, preferably accredited,
of the Meteorological Service or of another user organization or private company. For both
the procurement and operation of equipment, the educational and skill level of the observers
and technicians who will use the measuring system must be considered. Use of the equipment
by these staff members should be part of the test programme. The personnel who will install,
use, maintain and repair the equipment should evaluate those portions of the instrument or
measuring system, including the adequacy of the instructions and manuals that they will use in
their job. Their skill level should also be considered when preparing procurement specifications.
4.3 CALIBRATION
Instrument or measuring system calibration is the first step in defining data validity. In general,
it involves comparison against a known standard to determine how closely instrument output
matches the standard over the expected range of operation. Performing laboratory calibration
carries the implicit assumption that the instrument’s characteristics are stable enough to retain
the calibration in the field. A calibration history over successive calibrations should provide
confidence in the instrument’s stability.
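The screening of a calibration history can be sketched in code. The following Python fragment is illustrative only (the function name and the tolerance value are not prescribed by this Guide): it flags an instrument whose correction changes between successive calibrations by more than a stated stability tolerance.

```python
def is_stable(history, tolerance):
    """history: corrections (standard minus instrument) found at successive
    calibrations, in instrument units. The instrument is judged stable if no
    step between consecutive calibrations exceeds the stated tolerance."""
    steps = [abs(b - a) for a, b in zip(history, history[1:])]
    return all(step <= tolerance for step in steps)

# Example: corrections of a barometer at four successive calibrations,
# with a hypothetical stability tolerance of 0.1 hPa
stable = is_stable([0.12, 0.15, 0.11, 0.18], tolerance=0.1)
```

An instrument failing such a check should not be assumed to retain its laboratory calibration in the field.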
Specifically, calibration is the operation that, under specified conditions, in a first step,
establishes a relation between the quantity values with measurement uncertainties provided
by measurement standards and corresponding indications with associated measurement
uncertainties, and, in a second step, uses this information to establish a relation for obtaining
a measurement result from an indication (JCGM, 2012). It should define an instrument’s or
measuring system’s bias or average deviation from the standard against which it is calibrated, its
random errors, the range over which the calibration is valid, and the existence of any thresholds
or non‑linear response regions. It should also define resolution and hysteresis. Hysteresis should
be identified by cycling the sensor over its operating range during calibration. The result of a
calibration is often expressed as a calibration factor or as a series of calibration factors in the
form of a calibration table or calibration curve. The results of a calibration must be recorded in a
document called a calibration certificate or a calibration report.
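The identification of hysteresis by cycling the sensor over its operating range can be illustrated with a short sketch. The Python fragment below is a simplified illustration, not a prescribed procedure; it assumes readings taken at the same setpoints on the rising and falling branches of the cycle.

```python
def hysteresis(up_readings, down_readings):
    """Readings taken at the same setpoints while cycling the sensor up and
    then down over its operating range; returns the largest absolute
    difference between the two branches."""
    return max(abs(u - d) for u, d in zip(up_readings, down_readings))

# Example: a hygrometer cycled over 20-80 %RH (illustrative values)
up_branch = [20.1, 40.3, 60.2, 80.1]
down_branch = [20.6, 40.9, 60.7, 80.3]
h = hysteresis(up_branch, down_branch)  # largest branch difference, in %RH
```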
The calibration certificate or report should define any bias that can then be removed through
mechanical, electrical or software adjustment. The remaining random error is not repeatable and
cannot be removed, but can be statistically defined through a sufficient number of measurement
repetitions during calibration.
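The separation of bias from random error described above can be sketched as follows. The Python fragment is illustrative (readings and function name are hypothetical): the mean deviation from the standard over repeated measurements estimates the removable bias, and the sample standard deviation of the deviations characterizes the remaining random error.

```python
from statistics import mean, stdev

def bias_and_random_error(instrument_readings, standard_readings):
    """Deviations of repeated instrument readings from simultaneous readings
    of the standard: the mean deviation is the bias (removable by adjustment);
    the sample standard deviation characterizes the remaining random error."""
    deviations = [i - s for i, s in zip(instrument_readings, standard_readings)]
    return mean(deviations), stdev(deviations)

# Example with illustrative thermometer readings against a reference (deg C)
bias, random_err = bias_and_random_error(
    [10.12, 10.08, 10.11, 10.09, 10.10],
    [10.00, 10.00, 10.00, 10.00, 10.00],
)
```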
4.3.2 Standards
Note: When these standards are relevant to NMHS calibration laboratories or RICs they should also be traceable to
the International System of Units (SI).
Secondary standards often reside in major calibration laboratories and are usually not suitable for
field use. These standards are generally called reference measurement standards, according to
ISO/IEC 17025 (ISO/IEC, 2017). Working standards are usually laboratory instruments that have
been calibrated against a secondary standard. Working standards that may be used in the field
are known as travelling standards. Travelling standard instruments may also be used to compare
instruments in a laboratory or in the field. All of these standards used for a meteorological
purpose and relevant to NMHS calibration laboratories or RICs should be traceable to SI.
4.3.3 Traceability
It is highly recommended that meteorological measurements are traceable, for example, through
travelling standards, working standards and secondary standards to national standards, and that
the accumulated uncertainties are known (except for those that arise in the field, which have to
be determined by field testing).
Manufacturers of meteorological instruments should deliver their quality products, for example,
barometers or thermometers, with calibration certificates or calibration reports issued by
an accredited laboratory. These documents may or may not be included in the basic price of
the instrument, but may be available as options. Calibration certificates given by accredited
calibration laboratories may be more expensive than factory certificates. As discussed in the
previous section, environmental and functional testing, and possibly additional calibration,
should be performed.
Users may also purchase calibration devices or measurement standards for their own
laboratories. A good calibration device should always be combined with a proper measurement
standard, for example, a liquid bath temperature calibrator with certified resistance
thermometers. For the example above, further considerations, such as the use of non‑conductive
silicone fluid, should be applied. Thus, if a temperature‑measurement device is mounted on an
electronic circuit board, the entire board may be immersed in the bath so that the device can be
tested in its operating configuration. Not only the calibration equipment and standards must be
of high quality, but the engineers and technicians of a calibration laboratory must be well trained
in basic metrology and in the use of available calibration devices and measurement standards.
Once instruments have passed initial calibration and testing and are accepted by the user,
a programme of regular calibration checks and calibrations should be instituted. Fragile
instruments are easily subject to breakage when transported to field sites, while others can
be too bulky and heavy for easy transportation. At distant stations, these instruments should
be kept stationary as far as possible, and should be calibrated against more robust travelling
standards that can be moved from one station to another by inspectors. Travelling standards
must be compared frequently against a working standard or reference standard in the calibration
laboratory, and before and after each inspection tour.
Field inspection offers the user the ability to check the instrument on site. Leaving the instrument
installed at a meteorological station eliminates any downtime that would occur while removing
and reinstalling the instrument in the field. Inspection is usually done at one point against the
working standard by placing the working standard as close to the instrument under inspection
(IUI) as possible. Stabilization time must be allowed to reach temperature equilibrium between
the working standard and the IUI. Attention must be paid to the proximity of the working
standard to the IUI, the temperature gradients, the airflow, the pressure differences and any
other factors that could influence the inspection results. This field inspection is an effective way
to verify the instrument quality. The most important disadvantage is that the inspection is usually
limited to one point. The second disadvantage is that if an error is reported, the IUI should be
removed and replaced by a new calibrated sensor. Then the IUI has to be calibrated and adjusted
if possible in a laboratory. It should also be noted that the field inspection provides additional
valuable information as it involves testing the whole instrumental set‑up in the field, including
cabling, and the like. When performing field inspections, it is important that the metadata of the
conditions at the time of the inspection be recorded, including all details on the changes made to
the instrumental set‑up (see additional details provided in Volume III, Chapter 1, 1.7).
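A one‑point field inspection of the kind described above can be sketched in code. The fragment below is purely illustrative (field names, tolerance and station identifiers are hypothetical): it records the deviation of the IUI from the working standard together with the inspection metadata.

```python
def inspect_one_point(iui_reading, ws_reading, tolerance, metadata):
    """One-point field inspection: after allowing stabilization, compare the
    instrument under inspection (IUI) with the co-located working standard
    and archive the result with the inspection metadata. Field names and the
    tolerance are illustrative only."""
    deviation = iui_reading - ws_reading
    record = {"deviation": deviation,
              "within_tolerance": abs(deviation) <= tolerance}
    record.update(metadata)
    return record

# Hypothetical inspection of a station thermometer (deg C)
record = inspect_one_point(21.7, 21.5, tolerance=0.3,
                           metadata={"station": "TEST-01", "date": "2023-06-01"})
```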
Inter-laboratory comparisons
Accredited calibration laboratories are generally required to participate in a minimum of one
proficiency test/ILC at least every five years for each major
sub‑discipline of the main disciplines of the laboratory’s scope of accreditation. Participation in
at least one proficiency test/ILC is required prior to the granting of accreditation. As stated in
the RICs’ terms of reference (Volume I, Chapter 1, Annex 1.C), an RIC must participate in and/or
organize ILCs of standard calibration instruments and methods.
An ILC provider conducts and supervises ILCs. It is preferable that an ILC provider is accredited
according to ISO/IEC 17043 (ISO/IEC, 2010). General guidelines for organizing ILCs, developed in
line with the requirements of ISO/IEC 17043, are available in Annex 4.A and should be followed
and implemented as far as possible.
Comparisons or evaluations of instruments and observing systems may be organized and carried
out at the following levels:
(a) International comparisons, in which participants from all interested countries may attend in
response to a general invitation;
(b) Regional intercomparisons, in which participants from countries of a certain region (for
example, WMO Regions) may attend in response to a general invitation;
(c) Multilateral and bilateral intercomparisons, in which participants from two or more
countries may agree to attend without a general invitation;
Reports of particular WMO international comparisons are referenced in other chapters in the
present Guide (see, for instance, Volume I, Chapters 3, 4, 9, 12, 14 and 15). Annex 4.D provides a
list of the international comparisons which have been supported by CIMO and which have been
published in the WMO technical document series.
Reports of comparisons at any level should be made known and available to the meteorological
community at large.
3 Recommendations adopted by CIMO at its eleventh session (1994), through the annex to Recommendation 14
(CIMO‑XI) and Annex IX.
ANNEX 4.A. GUIDELINES FOR ORGANIZING INTER‑LABORATORY
COMPARISONS
1. INTRODUCTION
An ILC is defined by the standard ISO/IEC 17043 (ISO/IEC, 2010) as the organization,
performance and evaluation of calibration and test results for the same or similar item by two
or more laboratories in accordance with predetermined conditions. ILCs offer laboratories the
additional means to assess their ability of competent performance either for the purpose of the
assessment by accreditation bodies or for their internal quality assurance process. ILC techniques
vary depending on the nature of the test item, the method in use and the number of laboratories
participating. Usually ILCs involve a test item to be measured or calibrated being circulated
successively among participating laboratories.
Following the definitions of ISO/IEC 17043, the ILC provider is an organization that takes
responsibility for all tasks in the development and operation of an ILC. An ILC coordinator is one
or more individuals with responsibility for organizing and managing all of the activities involved
in the operation of an ILC.
2.1.2 The responsibilities of the ILC provider that need to be met in the ILC are: initiation,
planning, appropriate instrument selection, operation of specific equipment, handling and
distribution of ILC items, operation of the data‑processing system, conduct of the statistical
analysis, performance evaluation of ILC participants, provision of opinions and interpretations,
and issuance and authorization of the ILC report.
2.2.1.1 An ILC protocol should be agreed upon by participants and must be documented
before commencement of the ILC. It should include at least the following information:
(b) Name, address and affiliation of the ILC coordinator and other personnel involved in the
design and operation of the ILC scheme;
(c) Activities to be subcontracted and the names of subcontractors involved in the operation of
the ILC scheme;
(h) Requirements for the production, quality control, storage and distribution of ILC items;
(j) Description of the information that is to be supplied to participants and the time schedule
for the various phases of the ILC scheme;
(k) Dates when ILC items are to be distributed to participants, the deadlines for the return of
results by participants and, where appropriate, the dates on which testing or measurement
is to be carried out by participants;
(l) Any information on methods or procedures that participants need to use to prepare the test
materials and perform the tests or measurements;
(m) Procedures for the test or measurement methods to be used for the homogeneity and
stability testing of ILC items;
(p) The origin, metrological traceability and measurement uncertainty of any assigned values;
(s) Description of the extent to which participant results, and the conclusions that will be
based on the outcome of the ILC scheme, are to be made public.
2.2.1.2 The ILC provider must ensure access to the necessary technical expertise and
experience. This may be achieved by establishing an advisory group, whose responsibilities
include, but are not limited to, the following: supervising the selection and preparation of
the test item, the drafting of the protocol, the choice of method and procedure, and all
communication with participants; ensuring that the time schedule is met; informing participants
about delays; informing each participant about the next participant in the scheme; and
supervising the issuing of invoices and of the interim and final reports.
2.2.2.1 Test items have to match the needs of ILC participants. Preparation of a test item
begins with its selection: first, the required characteristics of the test item, such as stability,
range, resolution and uncertainty, are specified. Then a suitable test item is acquired, either
chosen from existing equipment in stock or purchased. After that, the chosen test item is tested
(measured several times, submitted to the conditions that can be expected during transport and
measurements at the participating laboratories) in order to confirm the specified characteristics.
If tests are successful, the item is used for the ILC.
2.2.2.2 Test items with a stability worse than the uncertainty of any of the participating
laboratories are not used for the ILC scheme, unless otherwise agreed in advance with
participants.
Preliminary stability checks must be made and periodic checks of assigned property values
should be carried out throughout the course of the ILC. Where appropriate, the property values
to be determined in the ILC must be measured periodically, preferably over a range of conditions
under which the test item is to be stored prior to distribution. Test items must demonstrate
sufficient stability to ensure that they will not undergo any significant change throughout the
conduct of the ILC.
Participants in ILCs are expected to use a test method, calibration or measurement procedure
of their choice, which is consistent with routine procedures used in their laboratories. Under
certain circumstances, the ILC provider may instruct participants to use a specified method.
When participants are allowed to use a method of their choice, the ILC provider can, whenever
appropriate, request details of the chosen method, in order to properly interpret the results
obtained by different test methods.
The ILC provider should give detailed documented instructions to all participants that are
usually included as an integral part of the ILC protocol. Instructions to participants must include
details of factors that could influence the testing of the test items; the nature of the test items;
the test procedure employed; and the timing of the testing. Specific instructions related to the
recording and reporting of test results must include, but are not necessarily limited to, the units
of measurement, the number of significant figures, reporting basis, and the latest date for receipt
of test results.
2.3.2.1 To avoid any damage to the ILC items, the ILC provider should preserve and
segregate all ILC items, protecting them from any potentially damaging influence, for example,
humidity, temperature, electricity and magnetic fields, prior to their distribution to ILC
participants. For
each ILC, the items must be characterized in terms of specifications related to environmental
conditions that could occur during transport.
2.3.2.2 The ILC item should be protected from any adjustment (either by a
password‑protected part of the test item, or by a single‑usage seal).
2.3.2.3 The ILC provider should ensure adequate packaging of all ILC items and provide
secure storage areas and/or stock rooms which prevent damage or deterioration of any item
prior to distribution. When appropriate, the conditions of all stored or stocked items should be
assessed at specified intervals during their storage life in order to detect possible deterioration.
The ILC provider should control packaging and marking processes to the extent necessary to
ensure conformity with relevant regional, national and/or international safety and transport
requirements.
2.4.1.1 Results received from participants must be promptly recorded and analysed by
appropriately documented statistical procedures. In case of doubtful results after data analysis,
the ILC provider must promptly ask the participant that has generated the results to check
them. Before the final report is issued to the participants, all the participants should check their
data and confirm their consistency. Every participant of the ILC scheme, in accordance with the
protocol, should report all the relevant results and their uncertainties in a dedicated spreadsheet
table. Data analysis should include at least a summary of the measurement, performance
statistics and associated information consistent with the ILC statistical model and objectives. Two
steps are common to all ILCs:
(a) Determination of the assigned values – there are various procedures available for
establishment of the assigned values:
(i) Reference values – as determined by the ILC provider, based on analysis, measurement
or comparison of a test item alongside a standard, traceable to a national or
international standard;
(ii) Consensus values from expert laboratories – such laboratories should have
demonstrable competence.
The assigned value(s) must not be disclosed to participants until after the results
have been collated. The uncertainty of assigned values should be determined
using procedures described in Guide to the Expression of Uncertainty in Measurement
(ISO/IEC, 2008).
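Where a consensus value from expert laboratories is used, it is commonly formed as an uncertainty‑weighted mean. The Python sketch below is a simplified illustration only; in practice the evaluation of the assigned value and its uncertainty should follow the procedures of the Guide to the Expression of Uncertainty in Measurement (ISO/IEC, 2008).

```python
def consensus_value(results, uncertainties):
    """Consensus assigned value as an uncertainty-weighted mean (weights
    1/u**2) of expert-laboratory results, with the standard uncertainty of
    the weighted mean. A simplified sketch, not a prescribed procedure."""
    weights = [1.0 / u ** 2 for u in uncertainties]
    value = sum(w * x for w, x in zip(weights, results)) / sum(weights)
    u_value = (1.0 / sum(weights)) ** 0.5
    return value, u_value

# Three expert laboratories, illustrative results and uncertainties
x_ref, u_ref = consensus_value([100.02, 99.98, 100.01], [0.02, 0.04, 0.02])
```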
ILC results often need to be transformed into performance statistics for the purposes of
interpretation and comparison. The objective is to measure the deviation from the assigned
value in a manner that allows evaluation of performance. A commonly used statistic for
quantitative results in measurement comparison schemes is the En number:
En = (xlab − xref) / √(Ulab² + Uref²)
where xlab is the participant’s result, xref is the assigned value, Ulab is the expanded (k = 2)
uncertainty of the participant’s result and Uref is the expanded (k = 2) uncertainty of the reference
laboratory’s assigned value.
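The En number can be computed directly from these four quantities. The following Python sketch uses illustrative barometric values; the function name and the example figures are not taken from this Guide.

```python
import math

def en_number(x_lab, x_ref, u_lab, u_ref):
    """En = (xlab - xref) / sqrt(Ulab**2 + Uref**2), with Ulab and Uref the
    expanded (k = 2) uncertainties of the participant's result and of the
    reference laboratory's assigned value."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Illustrative barometer comparison (hPa): participant 1013.42 (U = 0.10)
# against an assigned value of 1013.38 (U = 0.06)
en = en_number(1013.42, 1013.38, 0.10, 0.06)
```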
A second commonly used statistic is the z‑score:

z = (x − X) / σ̂

where x is the participant's result, X is the assigned value and σ̂ is the “standard deviation for
ILC”.
2.4.2.1 The ILC provider is responsible for ensuring that the method of evaluation is
appropriate for maintenance of the credibility of the ILC. Such a method must be documented in
the ILC protocol and must include a description of the basis upon which the evaluation is made.
Criteria for performance evaluation are based on the statistical determination of En:

|En| ≤ 1 = satisfactory
|En| > 1 = unsatisfactory
or on z:

|z| ≤ 2 = satisfactory
2 < |z| < 3 = questionable
|z| ≥ 3 = unsatisfactory
2.4.2.2 Graphs should be used whenever possible to show performance. They should show
distributions of participant values, relationships between results on multiple test items and
comparative distributions for different methods.
The content of ILC reports can vary, depending on the purpose of a particular scheme, but each
report must be clear and comprehensive and must include data on the distribution of results of
all participants, together with an indication of the performance of individual participants. The
following information must normally be included in reports of ILC schemes:
(g) Statistical data and summaries, including assigned values and range of acceptable results
and graphical displays;
(i) Details of the traceability and uncertainty of assigned values, where applicable;
(j) Assigned values and summary statistics for test methods/procedures used by other
participants (if different methods are used by different participants);
(k) Comments on participants’ performance by the ILC provider and technical advisers;
(l) Procedures used to design and implement the scheme (which may include reference to a
scheme protocol);
2.5 Confidentiality
2.5.1 The identity of participants in an ILC is usually confidential and known only to the
minimum number of persons involved in the provision and evaluation of the ILC. All information
supplied by a participant to the ILC provider must be treated as confidential.
2.5.2 Participants may agree to waive the confidentiality of their identity in the ILC protocol
and/or in the ILC report.
2. The Executive Council will consider the approval of the intercomparison and its inclusion in
the programme and budget of WMO.
3. When there is an urgent need to carry out a specific intercomparison that was not
considered at the session of a constituent body, the president of the relevant body may
submit a corresponding proposal to the President of WMO for approval.
5. When at least one Member has agreed to act as host country and a reasonable number
of Members have expressed their interest in participating, an international organizing
committee should be established by the president of CIMO in consultation with the heads
of the constituent bodies concerned, if appropriate.
6. Before the intercomparison begins, the organizing committee should agree on its
organization, for example, at least on the main objectives, place, date and duration
of the intercomparison, conditions for participation, data acquisition, processing and
analysis methodology, plans for the publication of results, intercomparison rules, and the
responsibilities of the host(s) and the participants.
7. The host should nominate a project leader who will be responsible for the proper conduct
of the intercomparison, the data analysis, and the preparation of a final report of the
intercomparison as agreed upon by the organizing committee. The project leader will be a
member ex officio of the organizing committee.
8. When the organizing committee has decided to carry out the intercomparison at sites
in different host countries, each of these countries should designate a site manager. The
responsibilities of the site managers and the overall project management will be specified
by the organizing committee.
10. All further communication between the host(s) and the participants concerning
organizational matters will be handled by the project leader and possibly by the site
managers unless other arrangements are specified by the organizing committee.
11. Meetings of the organizing committee during the period of the intercomparison could be
arranged, if necessary.
12. After completion of the intercomparison, the organizing committee shall discuss and
approve the main results of the data analysis of the intercomparison and shall make
proposals for the utilization of the results within the meteorological community.
13. The final report of the intercomparison, prepared by the project leader and approved by
the organizing committee, should be published in the WMO Instruments and Observing
Methods Report series.
ANNEX 4.C. GUIDELINES FOR ORGANIZING WMO INTERCOMPARISONS
OF INSTRUMENTS
1. INTRODUCTION
1.1 These guidelines are complementary to the procedures of WMO global and regional
intercomparisons of meteorological instruments. They assume that an international organizing
committee has been set up for the intercomparison and provide guidance to the organizing
committee for its conduct. In particular, see Volume I, Chapter 12, Annex 12.D.
1.2 However, since all intercomparisons differ to some extent from each other, these
guidelines should be considered as a generalized checklist of tasks. They should be modified as
situations so warrant, keeping in mind the fact that fairness and scientific validity should be the
criteria that govern the conduct of WMO intercomparisons and evaluations.
1.3 Final reports of other WMO intercomparisons and the reports of meetings of
organizing committees may serve as examples of the conduct of intercomparisons. These are
available from the World Weather Watch Department of the WMO Secretariat.
The organizing committee should examine the achievements to be expected from the
intercomparison and identify the particular problems that may be expected. It should prepare
a clear and detailed statement of the main objectives of the intercomparison and agree on any
criteria to be used in the evaluation of results. The organizing committee should also investigate
how best to guarantee the success of the intercomparison, making use of the accumulated
experience of former intercomparisons, as appropriate.
3.1 The host country should be requested by the Secretariat to provide the organizing
committee with a description of the proposed intercomparison site and facilities (location(s),
environmental and climatological conditions, major topographic features, and so forth). It should
also nominate a project leader.1
3.2 The organizing committee should examine the suitability of the proposed site and
facilities, propose any necessary changes, and agree on the site and facilities to be used. A full site
and environmental description should then be prepared by the project leader. The organizing
committee, in consultation with the project leader, should decide on the date for the start and
the duration of the intercomparison.
3.3 The project leader should propose a date by which the site and its facilities will be
available for the installation of equipment and its connection to the data‑acquisition system.
The schedule should include a period of time to check and test equipment and to familiarize
operators with operational and routine procedures.
1 When more than one site is involved, site managers shall be appointed, as required. Some tasks of the project leader,
as outlined in this annex, shall be delegated to the site managers.
4.1 The organizing committee should consider technical and operational aspects,
desirable features and preferences, restrictions, priorities, and descriptions of different
instrument types for the intercomparison.
4.2 Normally, only instruments in operational use or instruments that are considered
for operational use in the near future by Members should be admitted. It is the responsibility of
the participating Members to calibrate their instruments against recognized standards before
shipment and to provide appropriate calibration certificates. Participants may be requested to
provide two identical instruments of each type in order to achieve more confidence in the data.
However, this should not be a condition for participation.
(a) The Secretary‑General to invite officially Members (who have expressed an interest) to
participate in the intercomparison. The invitation shall include all necessary information
on the rules of the intercomparison as prepared by the organizing committee and the
project leader;
(b) The project leader to handle all further contact with participants.
5. DATA ACQUISITION
5.1.1 The organizing committee should evaluate a proposed layout of the instrument
installation prepared by the project leader and agree on a layout of instruments for the
intercomparison. Special attention should be paid to fair and proper siting and exposure
of instruments, taking into account criteria and standards of WMO and other international
organizations. The adopted siting and exposure criteria shall be documented.
The host country should make every effort to include at least one reference instrument in
the intercomparison. The calibration of this instrument should be traceable to national or
international standards. A description and specification of the standard should be provided to
the organizing committee. If no recognized standard or reference exists for the variable(s) to be
measured, the organizing committee should agree on a method to determine a reference for the
intercomparison.
5.4.1 Normally the host country should provide the necessary data‑acquisition system
capable of recording the required analogue, pulse and digital (serial and parallel) signals from all
participating instruments. A description and a block diagram of the full measuring chain should
be provided by the host country to the organizing committee. The organizing committee, in
consultation with the project leader, should decide whether analogue chart records and visual
readings from displays will be accepted in the intercomparison for analysis purposes or only for
checking the operation.
5.4.2 The data‑acquisition system hardware and software should be well tested before the
comparison is started and measures should be taken to prevent gaps in the data record during
the intercomparison period.
The organizing committee should agree on an outline of a time schedule for the intercomparison,
including normal and specific tasks, and prepare a time chart. Details should be further worked
out by the project leader and the project staff.
6.1.1 All essential data of the intercomparison, including related meteorological and
environmental data, should be stored in a database for further analysis under the supervision of
the project leader. The organizing committee, in collaboration with the project leader, should
propose a common format for all data, including those reported by participants during the
intercomparison. The organizing committee should agree on near‑real‑time monitoring and
quality‑control checks to ensure a valid database.
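Such checks can be very simple. The following sketch is illustrative only: the variables, limits and thresholds are hypothetical and are not prescribed by this Guide. It shows a plausibility (range) check and a step check of the kind an organizing committee might agree on for near‑real‑time monitoring.

```python
# Illustrative near-real-time quality-control checks (hypothetical limits).

def range_check(value, lower, upper):
    """Accept a value only if it lies within agreed plausibility limits."""
    return lower <= value <= upper

def step_check(previous, current, max_step):
    """Accept a value only if the change from the previous sample is plausible."""
    return abs(current - previous) <= max_step

# Example: 1-minute air-temperature samples (degrees Celsius)
samples = [12.1, 12.2, 12.1, 25.0, 12.3]
flags = []
for i, v in enumerate(samples):
    ok = range_check(v, -80.0, 60.0)
    if i > 0:
        ok = ok and step_check(samples[i - 1], v, max_step=3.0)
    flags.append(ok)
# The spurious 25.0 sample fails the step check; note that the first good
# value after it is also flagged, a known limitation of simple step checks.
```

In practice such flags would be written to the intercomparison database alongside the raw values so that suspect data can be reviewed rather than discarded.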
6.1.2 After completion of the intercomparison, the host country should, on request,
provide each participating Member with a dataset from its submitted instrument(s). This set
should also contain related meteorological, environmental and reference data.
70 GUIDE TO INSTRUMENTS AND METHODS OF OBSERVATION - VOLUME V
6.2.1 The organizing committee should propose a framework for data analysis and
processing and for the presentation of results. It should agree on data conversion, calibration and
correction algorithms, and prepare a list of terms, definitions, abbreviations and relationships
(where these differ from commonly accepted and documented practice). It should elaborate and
prepare a comprehensive description of statistical methods to be used that correspond to the
intercomparison objectives.
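As an illustration of the statistical quantities such a description might cover, the sketch below computes the bias, the standard deviation of the differences and the root‑mean‑square difference between a participating instrument and the agreed reference. The data are hypothetical; the choice of statistics for a real intercomparison rests with the organizing committee.

```python
# Minimal sketch of paired-difference statistics for an intercomparison.
import math

def comparison_stats(test, ref):
    """Return (bias, standard deviation of differences, RMS difference)."""
    d = [t - r for t, r in zip(test, ref)]
    n = len(d)
    bias = sum(d) / n                                   # mean difference
    rmsd = math.sqrt(sum(x * x for x in d) / n)         # RMS difference
    sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))  # sample s.d.
    return bias, sd, rmsd

# Hypothetical readings from one instrument against the reference
bias, sd, rmsd = comparison_stats([10.2, 10.4, 9.9, 10.1],
                                  [10.0, 10.0, 10.0, 10.0])
```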
6.2.4 Normally the project leader should be responsible for the data‑processing and
analysis. The project leader should, as early as possible, verify the appropriateness of the selected
analysis procedures and, as necessary, prepare interim reports for comment by the members
of the organizing committee. Changes should be considered, as necessary, on the basis of
these reviews.
6.2.5 After completion of the intercomparison, the organizing committee should review
the results and analysis prepared by the project leader. It should pay special attention to
recommendations for the utilization of the intercomparison results and to the content of the
final report.
7.1 The organizing committee should draft an outline of the final report and request the
project leader to prepare a provisional report based on it.
7.2 The final report of the intercomparison should contain, for each instrument, a
summary of key performance characteristics and operational factors. Statistical analysis results
should be presented in tables and graphs, as appropriate. Time‑series plots should be considered
for selected periods containing events of particular significance. The host country should be
invited to prepare a chapter describing the database and facilities used for data‑processing,
analysis and storage.
7.3 The organizing committee should agree on the procedures to be followed for
approval of the final report, such as:
(a) The draft final report will be prepared by the project leader and submitted to all organizing
committee members and, if appropriate, also to participating Members;
(b) Comments and amendments should be sent back to the project leader within a specified
time limit, with a copy to the chairperson of the organizing committee;
(c) When there are only minor amendments proposed, the report can be completed by the
project leader and sent to the WMO Secretariat for publication;
(d) In the case of major amendments or if serious problems arise that cannot be resolved by
correspondence, an additional meeting of the organizing committee should be considered
(the president of CIMO should be informed of this situation immediately).
7.4 The organizing committee may agree that intermediate and final results may be
presented only by the project leader and the project staff at technical conferences.
8. RESPONSIBILITIES
8.1.1 Participants shall be fully responsible for the transportation of all submitted
equipment, all import and export arrangements, and any costs arising from these. Correct
import/export procedures shall be followed to ensure that no delays are attributable to
this process.
8.1.2 Participants shall generally install and remove any equipment under the supervision
of the project leader, unless the host country has agreed to do this.
8.1.3 Each participant shall provide all necessary accessories, mounting hardware, signal
and power cables and connectors (compatible with the standards of the host country), spare
parts and consumables for its equipment. Participants requiring a special or non‑standard
power supply shall provide their own converter or adapter. Participants shall provide all
detailed instructions and manuals needed for installation, operation, calibration and routine
maintenance.
8.2.1 The host country should provide, if asked, the necessary information to participating
Members on temporary and permanent (in the case of consumables) import and export
procedures. It should assist with the unpacking and installation of the participants’ equipment
and provide rooms or cabinets to house equipment that requires protection from the weather
and for the storage of spare parts, manuals, consumables, and so forth.
8.2.3 The necessary electrical power for all instruments shall be provided. Participants
should be informed of the network voltage and frequency and their stability. The connection
of instruments to the data‑acquisition system and the power supply will be carried out in
collaboration with the participants. The project leader should agree with each participant on the
provision, by the participant or the host country, of power and signal cables of adequate length
(and with appropriate connectors).
8.2.4 The host country should be responsible for obtaining legal authorization related
to measurements in the atmosphere, such as the use of frequencies, the transmission of laser
radiation, compliance with civil and aeronautical laws, and so forth. Each participant shall submit
the necessary documents at the request of the project leader.
8.2.5 The host country may provide information on accommodation, travel, local
transport, daily logistic support, and so forth.
8.3.1 Routine operator servicing by the host country will be performed only for long‑term
intercomparisons for which absence of participants or their representatives can be justified.
8.3.2 When responsible for operator servicing, the host country should:
(a) Provide normal operator servicing for each instrument, such as cleaning, chart changing,
and routine adjustments as specified in the participant’s operating instructions;
(b) Check each instrument every day of the intercomparison and inform the nominated contact
person representing the participant immediately of any fault that cannot be corrected by
routine maintenance;
(c) Do its utmost to carry out routine calibration checks according to the participant’s specific
instructions.
8.3.3 The project leader should maintain a log containing regular records of the performance
of all equipment participating in the intercomparison. This log should contain notes on everything
at the site that may have an effect on the intercomparison, on all events concerning participating
equipment, and on all events concerning equipment and facilities provided by the host country.
9.1 The project leader shall exercise general control of the intercomparison on behalf of
the organizing committee.
9.2 No changes to the equipment hardware or software shall be permitted without the
concurrence of the project leader.
9.3 Minor repairs, such as the replacement of fuses, will be allowed with the
concurrence of the project leader.
9.5 Any problems that arise concerning the participants’ equipment shall be addressed
to the project leader.
9.6 The project leader may select a period during the intercomparison in which
equipment will be operated with extended intervals between normal routine maintenance in
order to assess its susceptibility to environmental conditions. The same extended intervals will be
applied to all equipment.
ANNEX 4.D. REPORTS OF INTERNATIONAL COMPARISONS CONDUCTED UNDER THE AUSPICES OF
THE COMMISSION FOR INSTRUMENTS AND METHODS OF OBSERVATION
The following table is sorted by topic or instrument, in alphabetical order, and the reports for each topic are listed in reverse chronological order.
Note: For the most recent reports see https://community.wmo.int/en/activity-areas/imop/publications-and-iom-reports. The reports of the WMO International Pyrheliometer
Intercomparisons, conducted by the World Radiation Centre at Davos (Switzerland) and carried out at five‑yearly intervals, are also distributed by WMO.
Topic | Instruments and Observing Methods Report No. | Title of report
Barometers 46 The WMO Automatic Digital Barometer Intercomparison (de Bilt, Netherlands, 1989–1991), J.P. van der Meulen,
WMO/TD-No. 474 (1992)
Ceilometers 32 WMO International Ceilometer Intercomparison (United Kingdom, 1986), D.W. Jones, et al., WMO/TD-No. 217 (1988)
Humidity instruments 106 WMO Field Intercomparison of Thermometer Screens/Shields and Humidity Measuring Instruments (Ghardaïa, Algeria,
November 2008–October 2009), M. Lacombe, et al., WMO/TD-No. 1579 (2011)
Humidity instruments 38 WMO International Hygrometer Intercomparison (Oslo, Norway, 1989), J. Skaar, et al., WMO/TD-No. 316 (1989)
Humidity instruments 34 WMO Assmann Aspiration Psychrometer Intercomparison (Potsdam, German Democratic Republic, 1987), D. Sonntag,
WMO/TD-No. 289 (1989)
Precipitation gauges 67 WMO Solid Precipitation Measurement Intercomparison – Final Report, B.E. Goodison, et al., WMO/TD-No. 872 (1998)
Precipitation gauges 17 International Comparison of National Precipitation Gauges with a Reference Pit Gauge (1984), B. Sevruk and W.R. Hamon,
WMO/TD-No. 38 (1984)
Present weather instruments 73 WMO Intercomparison of Present Weather Instruments/Systems – Final Report (Canada and France, 1993–1995), M. Leroy, et al., WMO/TD-No. 887 (1998)
Pyranometers 98 Sub-Regional Pyranometer Intercomparison of the RA VI members from South-Eastern Europe (Split, Croatia,
22 July–6 August 2007), K. Premec, WMO/TD-No. 1501 (2009)
Pyranometers 16 Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders
of RA VI (Budapest, Hungary, July–December 1984), G. Major, WMO/TD-No. 146 (1986)
Pyrgeometers 129 Second International Pyrgeometer Intercomparison – Final Report, (Davos, Switzerland, 27 September–15 October 2015),
J. Gröbner and C. Thomann (2018)
Pyrheliometers 124 WMO International Pyrheliometer Comparison (Davos, Switzerland, 28 September–16 October 2015), W. Finsterle (2016)
Pyrheliometers 113 Third WMO Regional Pyrheliometer Comparison of RA II (Tokyo, 23 January – 3 February 2012),
N. Ohkawara, et al. (2013)
Pyrheliometers 112 Baltic Region Pyrheliometer Comparison (Norrköping, Sweden, 21 May–1 June 2012), T. Carlund (2013)
Pyrheliometers 108 WMO International Pyrheliometer Comparison (Davos, Switzerland, 27 September–15 October 2010), W. Finsterle (2011)
Pyrheliometers 97 Second WMO Regional Pyrheliometer Comparison of RA II (Tokyo, 22 January–2 February 2007), H. Sasaki,
WMO/TD-No. 1494 (2009)
Pyrheliometers 91 International Pyrheliometer Comparison – Final Report (Davos, Switzerland, 26 September–14 October 2005), W. Finsterle,
WMO/TD-No. 1320 (2006)
Pyrheliometers 64 Tercera Comparación Regional de la OMM de Pirheliómetros Patrones Nacionales AR III – Informe Final (Santiago, Chile,
24 February–7 March 1997), M.V. Muñoz, WMO/TD-No. 861 (1997)
Pyrheliometers 53 Segunda Comparación de la OMM de Pirheliómetros Patrones Nacionales AR III (Buenos Aires, Argentina,
25 November–13 December 1991), M. Ginzburg, WMO/TD-No. 572 (1992)
Pyrheliometers 44 First WMO Regional Pyrheliometer Comparison of RA IV (Ensenada, Mexico, 20–27 April 1989), I. Galindo,
WMO/TD-No. 345 (1989)
Pyrheliometers 43 First WMO Regional Pyrheliometer Comparison of RA II and RA V (Tokyo, 23 January–4 February 1989), Y. Sano,
WMO/TD-No. 308 (1989)
Radiosondes 107 WMO Intercomparison of High Quality Radiosonde Systems (Yangjiang, China, 12 July–3 August 2010), J. Nash, et al.,
WMO/TD-No. 1580 (2011) (clarification note)
Radiosondes 90 WMO Intercomparison of GPS Radiosondes – Final Report (Alcantâra, Brazil, 20 May–10 June 2001), R. da Silveira, et al.,
WMO/TD-No. 1314 (2006)
Radiosondes 85 WMO Radiosonde Humidity Sensor Intercomparison, Final Report of Phase I and Phase II (Phase I: Russian Federation,
1995–1997; Phase II: USA, 8–26 September 1995), Phase I: A. Balagurov, et al.; Phase II: F. Schmidlin,
WMO/TD-No. 1305 (2006)
Radiosondes 83 WMO Intercomparison of Radiosonde Systems – Final Report (Vacoas, Mauritius, 2–25 February 2005), J. Nash, et al., WMO/TD-No. 1303 (2006)
Radiosondes 76 Executive Summary of the WMO Intercomparison of GPS Radiosondes (Alcantâra, Maranhão, Brazil, 20 May–10 June 2001),
R.B. da Silveira, et al., WMO/TD-No. 1153 (2003)
Radiosondes 59 WMO International Radiosonde Comparison, Phase IV (Tsukuba, Japan, 15 February–12 March 1993), S. Yagi, et al.,
WMO/TD-No. 742 (1996)
Radiosondes 40 WMO International Radiosonde Comparison, Phase III (Dzhambul, USSR, 1989), A. Ivanov, et al., WMO/TD-No. 451 (1991)
Radiosondes 30 WMO International Radiosonde Comparison (United Kingdom, 1984/United States, 1985), J. Nash and F.J. Schmidlin,
WMO/TD-No. 195 (1987)
Radiosondes 29 WMO International Radiosonde Intercomparison Phase II (Wallops Island, United States, 4 February–15 March 1985),
F.J. Schmidlin, WMO/TD-No. 312 (1988)
Radiosondes 28 WMO International Radiosonde Comparison Phase I (Beaufort Park, United Kingdom, 1984), A.H. Hooper,
WMO/TD-No. 174 (1986)
Rainfall intensity gauges 99 WMO Field Intercomparison of Rainfall Intensity Gauges (Vigna di Valle, Italy, October 2007–April 2009), E. Vuerich, et al., WMO/TD-No. 1504 (2009)
Rainfall intensity gauges 84 WMO Laboratory Intercomparison of Rainfall Intensity Gauges – Final Report (France, The Netherlands, Italy, September 2004–September 2005), L. Lanza, et al., WMO/TD-No. 1304 (2006)
Sunshine duration recorders 16 Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (Budapest, July–December 1984), G. Major, WMO/TD-No. 146 (1986)
Thermometer screens 106 WMO Field Intercomparison of Thermometer Screens/Shields and Humidity Measuring Instruments (Ghardaïa, Algeria,
November 2008–October 2009), M. Lacombe, et al., WMO/TD-No. 1579 (2011)
Visibility instruments 41 The First WMO Intercomparison of Visibility Measurements – Final Report (United Kingdom, 1988/1989), D.J. Griggs, et al.,
WMO/TD-No. 401 (1990)
Wind instruments 62 WMO Wind Instrument Intercomparison (Mont Aigoual, France, 1992–1993), P. Gregoire and G. Oualid,
WMO/TD-No. 859 (1997)
REFERENCES AND FURTHER READING
Hoehne, W.E. Standardizing Functional Tests; NOAA Technical Memorandum NWS T&EL-12; United States
Department of Commerce: Sterling, Virginia, 1971. https://repository.library.noaa.gov/view/noaa/33644
Hoehne, W.E. Standardizing Functional Tests. In Preprints of the Second Symposium on Meteorological
Observations and Instrumentation; American Meteorological Society, 1972; pp. 161–165.
Hoehne, W.E. Progress and Results of Functional Testing; NOAA Technical Memorandum NWS T&EL-15;
United States Department of Commerce: Sterling, Virginia, 1977.
International Electrotechnical Commission (IEC). Classification of Environmental Conditions –
Part 1: Environmental Parameters and their Severities; IEC 60721-1; Geneva, 2002. https://webstore.iec.ch/publication/3030
International Organization for Standardization (ISO). Sampling procedures for inspection by attributes –
Part 1: Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection;
ISO 2859-1:1999; Geneva, 1999. https://www.iso.org/standard/1141.html
International Organization for Standardization (ISO). Sampling procedures for inspection by variables
– Part 1: Specification for single sampling plans indexed by acceptance quality limit (AQL)
for lot-by-lot inspection for a single quality characteristic and a single AQL; ISO 3951-1:2022;
Geneva, 2022. https://www.iso.org/standard/74706.html
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
Uncertainty of Measurement – Part 3: Guide to the Expression of Uncertainty in Measurement
(GUM:1995); ISO/IEC Guide 98-3:2008, incl. Suppl. 1:2008/Cor 1:2009, Suppl. 1:2008,
Suppl. 2:2011; Geneva, 2008 (Equivalent to: JCGM, 2008: Evaluation of Measurement Data –
Guide to the Expression of Uncertainty in Measurement; JCGM 100:2008, corrected in 2010; incl.
JCGM 101:2008, JCGM 102:2011).
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
Conformity assessment – General requirements for the competence of proficiency testing
providers; ISO/IEC 17043:2023; Geneva, 2023. https://www.iso.org/standard/80864.html
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
General requirements for the competence of testing and calibration laboratories; ISO/IEC 17025:2017;
Geneva, 2017. https://www.iso.org/standard/66912.html
Joint Committee for Guides in Metrology (JCGM). International Vocabulary of Metrology – Basic and
General Concepts and Associated Terms (VIM); JCGM 200:2012; 2012.
National Weather Service (NWS). Natural Environmental Testing Criteria and Recommended Test
Methodologies for a Proposed Standard for National Weather Service Equipment; United
States Department of Commerce: Sterling, Virginia, 1980.
National Weather Service (NWS). NWS Standard Environmental Criteria and Test Procedures; United States
Department of Commerce: Sterling, Virginia, 1984.
World Meteorological Organization (WMO). Basic Documents No. 1 (WMO-No. 15); Geneva, 2015.
World Meteorological Organization (WMO). J. Bojkovski, J. Drnovsek, D. Groselj et al. Interlaboratory
Comparison in the Field of Temperature, Humidity and Pressure in the WMO Regional Association
VI (MM-ILC-2015-THP); Report No. 128; Geneva, 2018.
World Meteorological Organization (WMO)/International Council of Scientific Unions (ICSU). Revised
Instruction Manual on Radiation Instruments and Measurements (WMO/TD-No. 149); World
Climate Research Programme (WCRP) Publications Series No. 7; Geneva, 1986.
CHAPTER 5. TRAINING OF INSTRUMENT SPECIALISTS
5.1 INTRODUCTION
5.1.1 General
Given that the science and application of meteorology rely increasingly on continuous series
of measurements using instruments and systems of increasing sophistication, this chapter
focuses on the training of those specialists who deal with all aspects of these systems: the
planning, specification, design, installation, calibration, maintenance and operation of the
meteorological measuring instruments and remote‑sensing systems, and the management of
observational programmes and networks. To a lesser extent, this chapter also deals with the
training requirements for those performing manual observations. Competency frameworks
for all of these specialists are provided in Annexes 5.A to 5.D. This chapter is aimed at technical
managers and trainers and at the observations and instrument specialists who wish to advance
in their profession.
Training skilled personnel is critical to the availability of necessary and appropriate technologies
in all countries so that the WMO Integrated Global Observing System (WIGOS) can produce
cost‑effective data of uniform good quality and timeliness. However, more than just technical
capability with instruments is required. Modern meteorology requires technologists who are also
capable as planners and project managers, knowledgeable about telecommunications and data
processing, good advocates for effective technical solutions, and skilled in the areas of financial
budgets and people management. Thus, for most instrument specialists or meteorological
instrument system engineers, training programmes should be broad‑based and include personal
development and management skills as well as expertise in modern technology.
WMO Regional Training Centres (RTCs) have been established in many countries, and many
of them offer training in various aspects of the operation and management of instruments and
instrument systems. Similarly, Regional Instrument Centres (RICs), Regional Marine Instrument
Centres (RMICs) and Regional WIGOS Centres (RWCs) have been set up in many places, and
some of them can provide training.
Training is a vital part of the process of technology transfer, which is the developmental process
of introducing new technical resources into service to improve quality and reduce operating
costs. New resources demand new skills for the introductory process and for ongoing operation
and maintenance. This human dimension is more important in capacity development than the
technical material.
As meteorology is a global discipline, the technology gap between developed and developing
nations is a particular issue for technology transfer. Providing effective training strategies,
programmes and resources that foster self‑sustaining technical infrastructures and build human
capacity in developing countries is a goal that must be kept constantly in view.
This chapter deals with training mainly as an issue for National Meteorological and Hydrological
Services (NMHSs). However, the same principles apply to any organizations that take
meteorological measurements, whether they train their own staff or expect to recruit suitably
qualified personnel. In common with all the observational sciences, the benefits of training are
self‑evident; it ensures standardized measurement procedures and the most effective use and
care of equipment.
Taking measurements using instrument systems depends on physical principles (for example, the
change in resistance) to sense the atmospheric variables and transduce them into a standardized
form that is convenient for the user (for example, an electrical signal to input into an automatic
weather station (AWS)). The theoretical basis for understanding the measurement process
must also take into account the coupling of the instrument to the quantity being measured
(the representation or exposure) as well as the instrumental and observational errors with which
every measurement is fraught. The basic measurement data are then often further processed
and coded in more or less complex ways, thus requiring further theoretical understanding
(for example, the reduction of atmospheric pressure to mean sea level, or upper‑air messages
derived from a radiosonde flight).
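As a simplified illustration of such further processing, the sketch below reduces station pressure to mean sea level with the hypsometric relation p0 = p * exp(g*h / (Rd*Tv)). This is a teaching example under stated assumptions (constant virtual temperature, standard gravity), not the operationally prescribed reduction procedure; the station values are hypothetical.

```python
# Simplified reduction of station pressure to mean sea level (illustrative).
import math

G = 9.80665    # standard gravity, m s^-2
R_D = 287.05   # specific gas constant for dry air, J kg^-1 K^-1

def reduce_to_msl(p_station_hpa, height_m, t_virtual_k):
    """Hypsometric reduction assuming a constant mean virtual temperature."""
    return p_station_hpa * math.exp(G * height_m / (R_D * t_virtual_k))

# Hypothetical station at 300 m, 980.0 hPa, mean virtual temperature 288 K
p_msl = reduce_to_msl(980.0, 300.0, 288.0)   # roughly 1015.5 hPa
```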
Taking the measurement also depends on practical knowledge and skill in terms of how to
install and set up the instrument to take a standardized measurement, how to operate it safely
and accurately, and how to carry out any subsequent calculations or coding processes with
minimal error.
Thus, theoretical and practical matters are closely related in achieving measurement data
of known quality, and the personnel concerned in the operation and management of the
instrument systems need theoretical understanding and practical skills that are appropriate to
the complexity and significance of their work. The engineers who design or maintain complex
instrumentation systems require a particularly high order of theoretical and practical training.
Organizations need to ensure that the qualifications, skills and numbers of their personnel or
other contractors (and thus training) are well matched to the range of tasks to be performed.
For example, the training needed to read air temperature in a Stevenson screen is at the lower
end of the range of necessary skills, while theoretical and practical training at a much higher
level is plainly necessary to specify, install, operate and maintain AWSs, meteorological satellite
receivers and radars.
Therefore, it is useful to apply a classification scheme for the levels of qualification for operational
requirements, employment, and training purposes. The national grades of qualification in
technical education applicable in a particular country will be important benchmarks. To help
the international community achieve uniform quality in their meteorological data acquisition
and processing, WMO recommends the use of its own classification of personnel with the
accompanying duties that they should be expected to carry out competently.
The Guide to the Implementation of Education and Training Standards in Meteorology and Hydrology
(WMO‑No. 1083) identifies two broad categories of personnel: professionals and technicians.
For meteorological and hydrological personnel, these categories are designated as meteorologist
and meteorological technician, and hydrologist and hydrological technician, respectively. The
recommended learning outcomes for each classification include a substantial component on
instruments and methods of observation related to the education, training and duties expected
at that level. The WMO classification of personnel also sets guidelines for the qualifications
for instrument specialists, including detailed learning outcomes for the initial training and
specialization of meteorological personnel. These guidelines enable syllabi and training courses
to be properly designed and interpreted; they also assist in the definition of skill deficits and aid
the development of balanced national technical skill resources.
Training needs should reflect the evolving needs of NMHSs. As NMHSs transition to new
systems and infrastructure, technical skill requirements must adapt to these changes:
examples include the transition from traditional manual observations to AWSs, or the adoption of
acoustic methods for discharge measurement. These needs should be reflected in
management policy.
It is important that NMHSs have a personnel plan that includes instrument specialists,
recognizing their value in the planning, development and maintenance of adequate and
cost‑effective weather observing programmes. The personnel plan should show all specialist
instrument personnel at graded levels of qualification (Guide to the Implementation of Education
and Training Standards in Meteorology and Hydrology (WMO‑No. 1083)). Skill deficits should be
identified, and provision made for recruitment and training. The WMO competency frameworks
(Annexes 5.A to 5.D) will help in refining personnel plans. Quality management systems are also
now recommended for all services, and quality systems are required under the WMO Technical
Regulations (WMO‑No. 49), Volume II.
Every effort should be made to retain scarce instrumentation technical skills by providing a work
environment that is technically challenging, has opportunities for career advancement, and has
salaries comparable with those of other technical skills, both within and outside the NMHS.
Training should be an integral part of the personnel plan. The introduction of new technology
and re‑equipment imply new skill requirements. New recruits will need training appropriate to
their previous experience, and skill deficits can also be made up by enhancing the skills of other
staff. This training also provides the path for career progression. It is helpful if each staff member
has a career profile showing training, qualifications and career progression, maintained by the
training department, to plan personnel development in an orderly manner.
National training programmes should aim at a balance of skills across all specialist classes and
job responsibilities (as described in the competency frameworks, Annexes 5.A to 5.D), giving
due attention to the initial, supplemental and refresher phases of training, so as to result in a
self‑sustaining technical infrastructure.
To achieve maximum benefit from training, it is essential to have clear aims and specific
objectives on which to base training plans, syllabi and expenditure. The following strategic aims
and objectives for the training of instrument specialists may be considered:
(a) To improve and maintain the quality of information in all meteorological observing
programmes, including consideration of sustainability and life‑cycle management;
(b) To enable NMHSs to become self‑reliant in the knowledge and skills required for the
effective planning, implementation and operation of meteorological data‑acquisition
programmes, and to enable them to develop maintenance services ensuring maximum
reliability, accuracy and economy from instrumentation systems;
(c) To realize fully the value of capital invested in instrumentation systems over their optimum
economic life;
(d) To plan transition activities that enable scalability and implementation of future
investments.
A set of competency requirements has been developed by WMO for education and training
providers for meteorological, hydrological and climate services (Guidelines for Trainers in
Meteorological, Hydrological and Climate Services (WMO‑No. 1114)). This framework describes the
following job responsibilities as competency units:
(a) Analyse the organizational context and manage the training processes;
Fulfilment of each of these competencies will help to provide balanced programmes of training
that meet the defined needs of the countries within each region for skills at all levels; ensure
effective knowledge and skill development in NMHSs by using appropriately qualified tutors,
good learning resources and facilities, and effective learning methods; provide for monitoring
the effectiveness of training by appropriate assessment and reporting procedures; and help to
provide effective training within given constraints. See 5.4 for a more detailed description of
these competency areas.
The general goal of training instrument specialists is to develop the competencies (skills,
knowledge and behaviour) required for successful service delivery. The WMO competency
frameworks for meteorological observations, instrumentation, calibration, and observing
programme and network management were developed to this end. For detailed descriptions of
each of these frameworks, see Annexes 5.A to 5.D.
Meteorological and hydrological data acquisition is a complex and costly activity involving
human and material resources, communication and computation. It is necessary to maximize the
benefit of the information derived while minimizing the financial and human resources required
in this endeavour.
The aim of quality data acquisition is to maintain the flow of representative, accurate and timely
instrumental data into the national meteorological processing centres at the least cost. Through
every stage of technical training, a broad appreciation of how all staff can affect the quality of
the end product should be encouraged. The discipline of total quality management (see Guide
to the Implementation of Quality Management Systems for National Meteorological and Hydrological
Services and Other Relevant Service Providers (WMO‑No. 1100)) considers the whole measurement
environment (applications, procedures, instruments and personnel) in so far as each of its
elements may affect quality. In total quality management, the data‑acquisition activity is studied
as a system or series of processes. Critical elements of each process – for example, time delay –
are measured and the variation in the process is defined statistically. Problem‑solving tools are
used by a small team of people who understand the process, to reduce process variation and
thereby improve quality. Processes are continuously refined by incremental improvement.
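The idea of measuring a critical element of a process and defining its variation statistically can be sketched with Shewhart‑style control limits at the mean plus or minus three standard deviations. The example below uses hypothetical daily data‑delivery delays; the choice of element and limits in a real quality programme would be made by the team studying the process.

```python
# Illustrative statistical process control: 3-sigma limits on a process metric.
import statistics

# Hypothetical daily data-delivery delays, in minutes
delays = [4.0, 5.2, 4.8, 5.0, 4.6, 5.4, 4.9]

mean = statistics.mean(delays)
sigma = statistics.stdev(delays)            # sample standard deviation
ucl = mean + 3 * sigma                      # upper control limit
lcl = mean - 3 * sigma                      # lower control limit

def in_control(value):
    """A new observation is 'in control' if it lies within the limits."""
    return lcl <= value <= ucl
```

Observations falling outside the limits signal a special cause to be investigated, which is the trigger for the problem‑solving step described above.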
All of the above influence data quality from the instrument expert’s point of view. The checklist
can be used by managers to examine areas over which they have control to identify points
of weakness, by training staff during courses on total quality management concepts, and by
individuals to help them be aware of areas where their knowledge and skill should make a
valuable contribution to overall data quality.
The International Organization for Standardization provides for formal quality systems, defined
by the ISO 9000 family of standards, under which organizations may be certified by external
auditors for the quality of their production processes and services to clients. These quality
systems depend heavily on training in quality management techniques.
Trainers will want to review guidance on quality management of competency assessment and
training, as discussed in the Guide to Competency (WMO‑No. 1205), Part III.
Learning is a process that is very personal and depends on an individual’s needs and interests.
People are motivated to learn when there is the prospect of some reward, for example, a salary
increase. However, research shows that other rewards, such as job satisfaction, involvement,
personal fulfilment, having some sense of power or influence, and the affirmation of peers and
superiors are at least equally strong, if not stronger, motivators. These rewards come through
enhanced work performance and relationships with others on the job.
Learning is an active process in which the learner reacts to the training environment and activity.
A change of behaviour occurs as the learner is involved mentally, physically and emotionally.
Trainers and managers should attempt to stimulate and encourage learning by creating a
conducive physical and psychological climate and by providing appropriate experiences,
methods and practical examples that promote learning. Students should feel at ease and
be comfortable in the learning environment, which should not provide distractions. The
“psychological climate” can be affected by the student’s motivation, the presentation style of
the trainer and learning resources, the affirmation of previously acquired knowledge, a work
environment free of embarrassment and ridicule, the establishment of an atmosphere of trust,
and the selection of learning activities.
The following principles promote effective learning:
(a) Readiness: Learning will take place more quickly and be more effective if the student is
ready, interested and wants to learn.
(b) Objectives: The learning objectives (including those related to competency standards)
should be clear to trainers and learners, and assessable to ensure they have been achieved.
(c) Active engagement: Learning is more effective if students actively solve problems
independently, rather than being passively supplied with answers or merely shown a skill.
(d) Association or relevance: Learning should be related to current job experiences, noting
similarities and differences to current practices.
(e) Formative evaluation: Learning should be confirmed by periodic practice or testing and
feedback. Learning that is distributed over several short sessions, each ending in evaluation
or practice, will be more effective than one long session.
(f) Practice or reinforcement: Practical exercises and repetition will help instil learning.
(g) Immediacy: Recounting intense, vivid or personal experiences captures the imagination and
may increase attention, relevance and impact.
CHAPTER 5. TRAINING OF INSTRUMENT SPECIALISTS 83
(h) Efficacy: Learning experiences that are challenging but allow for success are more
satisfying and better for learning than those that might too easily lead to failure and create
embarrassment. Receiving approval encourages learning.
(i) Ongoing support: The trainee’s supervisor must be fully supportive of the training and
must be able to maintain and reinforce it.
(j) Planning and evaluation: Training should be planned, carried out and evaluated
systematically, in the context of organizational needs.
Refer to Guidelines for Trainers in Meteorological, Hydrological and Climate Services (WMO‑No. 1114)
for additional principles and to the WMO Education and Training Programme for additional
guidance on many training topics.
People in a group will learn at different speeds, and some training methods will suit some
individuals better than others and be more effective in different circumstances. Using a variety
of training methods, activities and resources will increase attention and is more likely to help a
diverse group learn well.
Training for instrument specialists can take advantage of a wide range of methods and media.
The theoretical aspects of measurement and instrument design may be taught via lectures or
video and supported by graphs and diagrams. A working knowledge of the instrument system
for operation, maintenance and calibration can be gained by the use of illustrated text; films,
videos or in‑person demonstrations; physical models that can be disassembled and assembled
for practice; and ultimately practical experiences in operating systems and making observations.
Unsafe practices or modes of use may be simulated.
A meteorological instrument systems engineering group needs people who are not only
technically capable, but who are broadly educated to support the development of a wide range
of core competencies shared by other professionals. This includes being able to speak and write
well, to work collaboratively in teams, to manage tasks and projects efficiently, to use computer
technologies effectively, and to use good decision‑making processes. Skilled technologists
should receive training so that they can play a wider role in the decisions that affect the
development of their NMHS.
Good personal communication skills are necessary to work collaboratively and to support and
justify technical programmes, particularly in management positions. Some staff may need to
improve communication skills, and may benefit from courses in public speaking, negotiation,
letter and report writing or assertiveness training. Some staff may need assistance in learning
another language to further their training.
Throughout their working lives, instrument specialists should expect to be engaged in repeated
cycles of personal training, both through structured study and informal on‑the‑job training or
self‑study. Three phases of training can be recognized as follows:
(a) A developmental, initial training phase when the trainee acquires general theory and
practice as qualifications at various levels (see the Guide to the Implementation of Education
and Training Standards in Meteorology and Hydrology (WMO‑No. 1083));
(b) A supplementation phase, or specialist training, where the training is enhanced by learning
about specific techniques and equipment (see Annexes 5.A to 5.D);
(c) A refresher training phase, where some years after formal training the specialist needs
refresher training and updates on new techniques and equipment.
For instrument specialists, the initial training phase of technical education and training usually
occurs partly in an external technical institute and partly in the training establishment of the
NMHS where a basic course in meteorological instruments is taken. Note that technical or
engineering education may extend over all WMO classification levels.
The supplementation phase will occur over several years as the specialist takes courses on
special systems, for example, instrument maintenance and calibration of land and marine
AWSs, hydrological instruments and remote sensing systems, or in disciplines such as computer
software applications or management skills. For specialist training, increasing use will be made of
external training resources, including WMO‑sponsored training opportunities.
As the instrument specialist’s career progresses there will be a need for periodic refresher courses
to cover advances in instrumentation and technology, as well as for other supplementary courses
– in core competency areas, for example.
There is an implied progression in these phases. Each training course will assume that students
have some prerequisite knowledge upon which to build.
Most instrument specialists find themselves in the important and satisfying role of trainer from
time to time, and for some it will become their full‑time work with its own field of expertise. All
trainers need to develop competencies to become good trainers.
A good trainer is concerned with quality results, is highly knowledgeable in specified fields, and
has good communication skills. He or she will demonstrate empathy with students, and will be
patient and tolerant, ready to give encouragement and praise, flexible and imaginative, and
practised in a variety of training techniques.
Good trainers will set clear objectives and plan and prepare training sessions well. They will
maintain careful records of training methods, syllabi, course notes, courses held and the results,
and of budgets and expenditures. They will seek honest feedback on their performance and
be ready to modify their approach. They will also expect to be learning themselves throughout
their careers.
The Guidelines for Trainers in Meteorological, Hydrological and Climate Services (WMO‑No. 1114)
provide a more detailed treatment of the required competencies of trainers. These competencies
describe the training process and are outlined more succinctly below.
5.4.2 Analyse the organizational context and manage the training processes
To ensure that training is implemented in ways that will lead to the success of the instrument
specialists in the organization, the organizational context must be continually analysed,
and training plans, policies and processes must be developed, monitored and updated for
effectiveness.
This competency will primarily be the responsibility of senior staff members who have overall
responsibility for training, training managers, people who make decisions about overall human
resource development strategies, and all trainers who would benefit from having increased
awareness of the context in which they are operating.
Training should be conducted in full awareness of the current and evolving organizational and
learning contexts, taking into account organizational requirements, how human resources
are made available and applied, how strategic training plans are developed, and how training
procedures are implemented to comply with organizational and training plans, policies and
processes. It can be beneficial to develop and implement both strategic and operational training
plans. When implemented, training plans, policies and processes should be monitored and
updated to address evolving needs and technological advances.
To carry out these responsibilities, the staff involved must be able to understand the factors that
can cause change within an organization, including political, economic, social and technological
factors. They must also be able to develop and implement plans, policies and processes, know
which technologies are required to support training, and be able to apply quality assurance
methods, financial management, and marketing principles to promote training. Finally,
responsible staff should recognize and respond to organizational, technological and research
trends regarding training practices.
Training professionals should use systematic methods for identifying organizational and
individual learning needs, for specifying these as the learning outcomes required of training, and
for determining what needs to be assessed after training.
Training needs assessment is the process of determining when and what training is required.
Needs assessment should be a first step before making any training decision.
Learning needs assessment often begins with task analysis. The instrument specialist must be
trained to carry out many repetitive or complex tasks for the installation, maintenance and
calibration of instruments, and sometimes for troubleshooting or manufacturing them. A task
analysis checklist may be used to define the way in which the job is to be done, and could be
used by the tutor in training and then as a checklist by the trainee. First, the objective of the job
and the required standard of performance are written down. The job is broken down into logical
steps or stages of a convenient size. The form might consist of a table whose columns are headed,
for example, with “steps”, “methods”, “measures”, and “reasons”:
(a) Steps (what must be done): These are numbered and consist of a brief description of each
step of the task, beginning with an active verb;
(b) Methods (how it is to be done): An indication of the method and equipment to be used or
the skill required;
(c) Measures (to what standard): The standard of performance or tolerance required;
(d) Reasons (why it must be done): A brief explanation of the purpose of each step.
A flow chart would be a good visual means of relating the steps to the whole task, particularly
when the order of the steps is important or if there are branches in the procedure. A simple
example is shown in Figure 5.1.
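The four‑column task analysis form described above can be sketched as structured data, so that the same record serves both as a tutor's teaching aid and as a trainee's checklist. The task and all step wording in the sketch below are hypothetical illustrations, not taken from this Guide.

```python
# A minimal sketch of a task-analysis checklist held as structured data.
# The task and all step wording are invented for illustration.
from dataclasses import dataclass

@dataclass
class Step:
    number: int   # step number, in order of execution
    step: str     # what must be done (begins with an active verb)
    method: str   # how it is to be done (method, equipment or skill)
    measure: str  # to what standard it must be done
    reason: str   # why it must be done

checklist = [
    Step(1, "Isolate the sensor power supply",
         "Switch off and lock out the mains isolator",
         "Circuit verified dead with a multimeter",
         "Protects the technician and the equipment"),
    Step(2, "Remove the old sensor",
         "Release the mounting clamp with a spanner",
         "No damage to the mounting or cabling",
         "Frees the mount for the replacement sensor"),
]

# Print the form as a simple four-column table.
for s in checklist:
    print(f"{s.number}. {s.step} | {s.method} | {s.measure} | {s.reason}")
```

Holding the checklist as data rather than free text makes it straightforward to regenerate the printed form whenever a step, method or standard changes.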
Learning needs to be expressed in terms of learning outcomes, which in turn describe what
needs to be assessed when training is completed (see 5.4.6). Well‑written learning outcomes for
professional training (specialist and refresher training) should describe learning in terms of what
a learner should be able to do following the learning experience, not just what they should know
or understand. This helps to ensure a direct connection to required job competencies and job
tasks, which provides the justification for training. However, even for initial training, which may
include as much theory as practice, learning outcomes that use action verbs (“apply”, “perform”,
“demonstrate”, “analyse”, “solve”, and the like, rather than “know” or “understand”) will help in
deciding what to teach and how to assess learning.
Professionals learn their skills in a wide variety of ways, both formally and informally. Learning
solutions is the term used here to describe the modes of learning (for example, classroom
or online learning) and the structures in which learning takes place (for example, a course,
self‑directed study, on‑the‑job mentoring or coaching). Once the learning outcomes required are
known, the next step in planning is to decide which learning solutions should be used. Trainers
should resist the temptation to jump to a quick solution, and instead examine the needs and
constraints to come up with the best solution or solutions possible.
Each of the following learning solutions can be effective if chosen for the proper learning
outcomes and organizational abilities and constraints.
Formal solutions:
(c) Online distance learning courses made up mostly of live presentations or webinars;
(d) Online distance learning courses that are guided by a remote trainer or partially
self‑directed, and may utilize offline materials as well.
Informal solutions:
(a) On‑the‑job training; job practice under the guidance of an experienced person: this form
can be highly effective for instrument specialists, who may need extensive hands‑on
practice with authentic equipment; however, on‑the‑job training may not teach or assess
theoretical background knowledge sufficiently;
(b) Coaching and mentoring, in which a more experienced person provides either intensive
guidance for a brief period or periodic guidance over an extended period of time;
(c) Short online seminars or webinars, from less than one hour to one day;
(e) Self‑directed learning, in which the learner accesses information and learning resources,
such as online or computer‑based tutorials, videos or interactive simulations, either as
assigned or on the learner's own initiative;
(f) Job rotation or secondment, skill expansion through brief assignments in different jobs, or a
longer but fixed‑term assignment to gain additional work experience;
[Figure not reproduced. The flow chart shows the sequence: carry out a desktop survey of the
current installation and prepare or review the administrative tasks; ensure the mains power is
disconnected; if a wetness sensor is fitted, remove its power and data cables; disconnect the
instrument mains power from the supply and the data cables from the junction box; remove the
old instrument and take photographs of the whole installation and the wiring details (in and
out); prepare the support frame and install the power supply onto the upstand; fit the instrument
onto the support pole; connect the data cable to the junction box and the power cables to the
instrument; turn on and configure the instrument; confirm and verify that correct data are
received by the local logger and, with the Network Manager, that data are received by the main
system; clean the site and remove any debris (FOD); and update the asset database.]
Figure 5.1. Process for removal and installation of a new present weather sensor
(g) A job manual or documented instructions (using printed or online resources for self‑help
on the job);
(h) Learning from colleagues (during office or off‑the‑job discussions, or via an online
community, sometimes through formal or informal communities of practice, including
online discussion forums or blogs);
(i) Working in teams, for example with peers or more experienced colleagues;
(j) Working independently, but under close supervision (as a trained, but still new employee).
Often the best choices for learning are blended solutions, combinations of the above or
variations on them.
Once the learning objectives are specified and the learning solution or solutions are chosen,
trainers must plan the training and design the learning activities and resources that will be
included. This must be done based on established learning theory and a firm knowledge of the
participating learners. Learners in universities and technical schools may have different needs
and preferences than professionals requiring refresher training. For example, workplace learners
will likely want to understand the immediate benefits of the training for their work and want to
reach the learning outcomes more quickly. Trainers must also assess the current skill level of the
learners, and especially which students may need special attention.
Designing a training event or other learning solution begins with knowing what learning
outcomes are required, and how to help learners achieve them. Trainers will want to consider
the strengths and limitations of the learning activities that might be used. In general, trainers
will need to know how to create learning activities that include authentic tasks, and provide
opportunities for practising the required skills. But they will also need to be able to prepare
presentations and learning resources and choose the tools, technology and software required
for learning.
Learning activities should be offered in a logical sequence, and provide variety and practice. The
sequence must also be efficient. Active learning approaches provide opportunities not only for
practice, but also for assessment and feedback, which is just as critical during training as at the end.
The following list is a sample of the range of learning activities available. They can be mixed and
merged to create many variations of training events:
(a) Lectures: When thorough theoretical coverage is needed, a lecture can be the most direct
method. However, lectures are most effective when short, well structured and followed by
more active approaches. Lectures can also be kept active by interspersing questions and
discussions.
(b) Demonstrations: Rather than simply describing via a lecture, it is much more effective
to demonstrate complex technical skills, whether in the classroom, laboratory, or work
situation. Demonstrations are critical for the initial teaching of manual maintenance and
calibration procedures, for example. Demonstrations are best if followed by opportunities
to practise and ask questions.
(c) Field studies: The opportunity to observe practices or new instruments in the field
environment is useful for teaching installation, maintenance or calibration.
(d) Questions and issues: Rather than in the form of a lecture, instruction can be provided
around questions or issues that encourage students to think critically and solve problems.
(e) Learner‑centred discussions: Instead of only teacher‑led question and answer sessions
that might follow a lecture, letting students answer each other’s questions and guide the
direction of discussion can make learners more animated and feel more responsible.
(f) Small group discussions: Break students into small discussion groups to encourage more
contributions by each person and to bring out more diversity of opinion.
(h) Practice exercises: Create sets of practice exercises, such as lab exercises, that require the
application of skills to be learned.
(i) Projects: Engage learners in real‑world tasks and challenges. For informal learning
situations, these might include actual job tasks, internships, apprenticeships, or some other
work. In formal situations, projects might include research, report writing, data gathering
and statistical analysis, making a presentation, or creating a local application or case study.
A well‑designed training event still requires effective delivery to succeed. This means
offering training in an environment that fosters and sustains learning through involvement,
effective communication, and careful attention to the learners.
Good training delivery begins by ensuring that learning activities are engaging and well
organized so they proceed smoothly. Trainers should clearly communicate the purpose and
expected outcomes of learning activities, and create a supportive environment that is open to the
input of learners, and encourages them to ask questions freely and share concerns. Trainers must
develop mutual trust and respect between themselves and the learners, as well as among
learners. Trainers need to know how to be good listeners, and also how to ask probing
questions and provide effective feedback. At times they may need to mitigate disruptions
and conflict.
Finally, they need to have the technical skills to apply technologies that will be used during
training, both the instruments to be understood and the training tools, such as computers and
presentation technologies.
With limited resources available for training, real effort should be devoted to maximizing its
effectiveness. Training courses and resources should be dedicated to optimizing the benefits
of training the right personnel at the most useful time. For example, too little training may
be a waste of resources, sending management staff to a course for maintenance technicians
would be inappropriate, and it is pointless to train people 12 months before they have access to
new technology.
Training opportunities and methods should be selected to best suit knowledge and skill
requirements and trainees, bearing in mind their educational and national backgrounds.
To ensure maximum effectiveness, training should be evaluated.
Many trainers would say that assessment is the part of training about which they are least
confident. Assessment is stressful for both trainers and learners. However, it is an essential part of
learning. Without it, learners do not know how well they are learning, and trainers do not know if
their training is successful.
In some ways, learning assessment is simple. What needs to be assessed is actually determined
right from the beginning when the required learning outcomes are decided. If the learning
outcomes have been well defined, then the trainer knows what needs to be assessed.
What is difficult is finding effective and practical ways to assess job tasks in a training
environment. It is hard to recreate realistic conditions outside the job environment. However, this
can be approximated through exercises that use standard work equipment and real data.
Job competencies are best assessed on the job, particularly if the assessment has implications
for the certification of the person to perform that job. However, job tasks are composed of many
smaller actions and draw on a large amount of background knowledge; simpler methods can
assess these smaller tasks and the background knowledge, contributing to a more complete
assessment of how someone will be able to perform.
A variety of learning assessment methods might be used: quizzes, projects or reports, problem
solving and exercises, observations of tasks, peer and self‑assessment, and the like. Nearly any
active learning approach, if well observed, can also become an effective assessment method.
Skills are best tested by observation during performance of the learned task in a realistic
environment. A checklist of required actions and skills (an observation form) for the task may be
used by the assessor.
Several parties have an interest in the evaluation of training effectiveness:
(a) WMO, which is concerned with improving the quality of data obtained from the Global
Observing System. It generates training programmes, establishes funds and uses the
services of experts, primarily to improve the skill base in developing countries;
(b) An NMHS, which needs quality weather data and is concerned with the overall capability
of the division that performs data acquisition and particular instrumentation tasks
within certain staff number constraints. It is interested in the budget and cost–benefit of
training programmes;
(c) A training department or RTC, which is concerned with establishing training programmes
to meet specified objectives within an agreed budget; its trainers need to know how
effective their methods are in meeting these objectives and how they can be improved;
(d) Engineering managers, who are concerned with having the work skills available to
accomplish the work in their area of responsibility to the required standard and without
wasting time or materials;
(e) Trainees, who are concerned with the rewards and job satisfaction that come with increased
competence; they will want a training course to meet their needs and expectations.
Thus, the effectiveness of training should be evaluated at several levels. National and Regional
Training Centres might evaluate their programmes annually and regularly at longer intervals
(every 2–5 years), comparing the number of trainees in different courses and pass levels against
budgets and the objectives which have been set at the start of each period. Trainers will need to
evaluate the relevance and effectiveness of the content and presentation of their courses.
Evaluation may take several forms:
(a) A training report, which does not attempt to measure effectiveness. Instead, it is a factual
statement of, for example, the type and number of courses offered, dates and durations,
the number of trainees trained and qualifying, and the total cost of training. In some
situations, a report on the assessed capability of the student is required.
(b) Reaction evaluation, which measures the reaction of the trainees to the training
programme. It may take the form of a written questionnaire through which trainees share,
at the end of the course, their opinions about relevance, content, methods, training aids,
presentation and administration. As such, this method cannot immediately improve the
training that they are receiving. Therefore, every training course should also have regular
opportunities for review and student feedback through group discussion. This enables
the trainer to detect any problems with the training or any individual’s needs and to take
appropriate action.
(c) Learning assessment, which measures the trainee’s new knowledge and skills, is obviously
a measure of the training effectiveness and helpful for the trainee as well (see also 5.4.7.2).
Assessment provides more information when it is compared to a pre‑training test. Various
forms of written test (essay; short‑answer, true or false, or multiple‑choice questions;
drawing a diagram or flow chart) can be devised to test a trainee’s knowledge. Trainees
may also usefully test and score their own knowledge.
(d) Performance evaluation, which measures how the trainee's performance on the job has
changed some time after training; it is best compared with a pre‑training assessment.
This evaluation may be carried out by the employer at least six weeks after training, using
an observation form, for example. The training institution may also make an assessment
by sending questionnaires to both the employer and the trainee.
(e) Impact evaluation, which measures the effectiveness of training by determining the change
in an organization or work group. This evaluation may require planning and the collection
of baseline data before and after the specific training. Possible measures include the amount
of bad data and the number of missing data elements in meteorological reports, the time
taken to perform installations, and the cost of installations.
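As a worked illustration of such a baseline comparison, the share of missing data elements in reports before and after training can be computed; the function name and all figures below are invented for illustration only.

```python
# Hypothetical sketch: comparing a baseline quality measure (fraction of
# expected data elements missing from reports) before and after training.
def missing_rate(reports):
    """Fraction of expected data elements that are missing, over all reports."""
    missing = sum(r["missing"] for r in reports)
    expected = sum(r["expected"] for r in reports)
    return missing / expected

# Invented counts of expected and missing elements per reporting period.
before = [{"expected": 100, "missing": 8}, {"expected": 100, "missing": 12}]
after = [{"expected": 100, "missing": 3}, {"expected": 100, "missing": 5}]

drop = missing_rate(before) - missing_rate(after)
print(f"Missing-data rate fell by {drop:.1%}")  # from 10.0% to 4.0%
```

The same pattern applies to the other measures mentioned, such as the time or cost of installations, provided baseline data are collected before the training takes place.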
A variety of options can support continuing professional development: personal study; short courses (including teaching skills) run by technical
institutes; time off from regular work duties to study for higher qualifications; visits to the
factories of meteorological equipment manufacturers; visits and secondments to other NMHSs
and RICs; and attendance at WMO and other training centres and at technical conferences.
Trainers and managers should be aware of the sources of information and guidance available
to them; the external training opportunities that are available; the training institutions that
can complement their own work; and, not least, the financial resources that support all
training activities.
In general, NMHSs will be unable to provide the full range of technical education and training
required by their instrument specialists, and so will have varying degrees of dependence on
external educational institutions for training, including supplementary and refresher training in
advanced technology. Meteorological and hydrological engineering managers will need to be
knowledgeable about the curricula offered by their national institutions so that they can advise
their staff on suitable education and training courses. The Guide to the Implementation of Education
and Training Standards in Meteorology and Hydrology (WMO-No. 1083) gives guidance on the
syllabi necessary for the different classes of instrument specialists.
When instrument specialists are recruited from outside the NMHS to take advantage of
well‑developed engineering skills, it is desirable that they have qualifications from a recognized
national institution. They will then require further training in meteorology and its specific
measurement techniques and instrumentation.
RICs have been established to maintain standards and provide advice to Members. RICs
are intended to be centres of expertise on instrument types, characteristics, performance,
application and calibration. They should have a technical library on instrument science and
practice; laboratory space and demonstration equipment; and should maintain a set of standard
instruments with calibrations traceable to international standards. They should be able to offer
information, advice and assistance to Members in their Region and beyond.
Where possible, these centres should combine with a Regional Radiation Centre and should be
located within or near an RTC to share expertise and resources.
A particular role of RICs is to assist in organizing regional training seminars or workshops on the
maintenance, comparison and calibration of meteorological instruments and to provide facilities
and expert advisers.
RICs should aim to sponsor the best teaching methods and provide access to training resources
and media that may be beyond the resources of NMHSs. The centres should provide refresher
training for their own experts in the latest technology available and training methods in order to
maintain their capability.
1 Recommended by JCOMM at its third session (2009) through Recommendation 1 (JCOMM‑III).
2 Information on RMICs is available in the present Guide, Volume III, Chapter 4, Annex 4.A.
RMICs should assist in organizing regional training seminars or workshops on the maintenance,
comparison and calibration of marine meteorological and oceanographic instruments and
provide facilities and expert advisers.
RMICs should aim to sponsor the best teaching methods and provide access to training resources
and media. To maintain their capability the centres should arrange refresher training for their
own experts in training methods and the latest technology available.
WMO conducts periodic surveys of training needs by Region, class and meteorological
specialization. The findings guide the distribution and kind of training events sponsored by
WMO over a four‑year period. It is important that Member countries include a comprehensive
assessment of their need for instrument specialists so that WMO training can reflect true needs.
These publications include useful information for instrument specialists and their managers
(particularly Developing Meteorological and Hydrological Services through WMO Education and
Training Opportunities and A Compendium of Topics to Support Management Development in
Meteorological Services).
The WMO Education and Training Office maintains the WMO Global Campus web portal
(https://learningevents.wmo.int/#/), which offers links to tools that provide access to
information on events and learning resources in all areas of interest to WMO Members.
The managers of engineering groups should ensure that they are aware of technical training
opportunities announced by WMO by maintaining contact with the WMO Education and
Training Office and with the person in their organization who receives correspondence
concerning the following:
(a) Travelling experts/roving seminars/workshops: From time to time, WMO arranges for an
expert, or a group of experts, to conduct a specified training course, seminar or workshop
in several Member countries, usually in the same Region. Alternatively, experts may
conduct the training event at an RIC or RTC and students in the region will travel to the
centre. The objective is to make the best expertise available at the lowest overall cost,
bearing in mind the local situation.
(b) Fellowships: WMO provides training fellowships under its Technical Cooperation
Programme. Funding comes from several sources, including the United Nations
Development Programme, the Voluntary Cooperation Programme, WMO trust funds,
the regular budget of WMO and other bilateral assistance programmes. Short‑term
(less than 12 months) or long‑term (several years) fellowships are for studies or training
at universities, training institutes, or especially at WMO RTCs, and can come under the
categories of university degree courses, postgraduate studies, non‑degree tertiary studies,
specialized training courses, on‑the‑job training, and technical training for the operation
and maintenance of equipment. Applications cannot be accepted directly from individuals
but must be endorsed by the Permanent Representative with WMO of the candidate’s
country. The training required and its priority must be clearly defined. Given that it
takes an average of eight months to organize a candidate’s training programme because of
the complex consultations between the Secretariat and the donor and recipient countries,
applications are required well in advance of the proposed training period. This is only a
summary of the conditions. Full information and nomination forms are available from the
WMO Secretariat. Conditions are stringent and complete documentation of applications
is required.
In addition to WMO fellowships, agencies in some countries offer excellent training programmes
that may be tailored to the needs of the candidate. Instrument specialists should enquire about
these opportunities with the country or agency representative in their own country.
There are several institutions located around the world that provide training services. Training
services include, but are not limited to, providing subject matter experts, developing customized
training resources and delivering customized training to fill critical knowledge gaps at NMHSs.
Customized training resources could include online learning modules, training videos and/or
interactive practical simulations. Customized training could include competency‑based
e‑learning, blended learning and/or classroom instruction. Training delivery could include
case‑based and conceptual lessons, simulated event‑based scenarios, video‑based instruction,
recorded lectures, live webinars, web‑based distance learning courses and residence courses
provided at NMHSs or training institutions. International training institutions have the flexibility
and access to expert trainer resources to customize and adapt training resource offerings to meet
the needs of NMHSs.
(a) New data‑acquisition system purchase: All contracts for the supply of major
data‑acquisition systems (including donor‑funded programmes) should include
an adequate allowance for the training of local personnel in system operation and
maintenance. The recipient NMHS representatives should have a good understanding of
the training offered and should be able to negotiate in view of their requirements. While
training for a new system is usually given at the commissioning stage, it is useful to allow
for a further session after six months of operational experience or when a significant
maintenance problem emerges.
A bilateral training opportunity arises when a country installs and commissions a major
instrumentation system and trainees can be invited from another country to observe and
assist in the installation.
When international programmes, such as the World Climate Programme, the Atmospheric
Research and Environment Programme, or the Tropical Cyclone Programme, conduct large‑scale
experiments, there may be opportunities for local instrument specialists to be associated with
senior colleagues in the measurement programme and to thereby gain valuable experience.
If they can be associated with these exercises, instrument specialists will benefit from
involvement in some of the following activities: experimental design, instrument exposure,
operational techniques, data sampling, data acquisition, data processing, analysis and
interpretation of results. If such intercomparisons can be conducted at RICs, the possibility of
running a parallel special training course might be explored.
5.5.4.1 Cost‑effectiveness
Substantial costs are involved in training activities, and resources are always likely to be limited.
The costs of the various training options should therefore be identified and compared, the
cost‑effectiveness of all training activities monitored, and appropriate decisions taken. Overall,
the investment in training by the NMHS must be seen to be of value to the organization.
Costs may be divided into the direct costs of operating certain training courses and the indirect
or overhead costs of providing the training facility. Each training activity could be assigned some
proportion of the overhead costs as well as the direct operating costs. If the facilities are used by
many activities throughout the year, the indirect cost apportioned to any one activity will be low
and the facility will be used efficiently.
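The apportionment described above can be sketched with hypothetical figures. The function, the currency-free cost values and the equal-share rule below are illustrative assumptions only, not WMO-prescribed accounting:

```python
# Illustrative sketch (hypothetical figures): apportioning training costs.
# The annual overhead of a training facility is shared across the
# activities that use it, so heavier utilization lowers the indirect
# cost charged to any single course.

def cost_per_activity(direct_cost, annual_overhead, activities_per_year):
    """Total cost of one training activity: its own direct cost plus
    an equal share of the facility's annual overhead."""
    return direct_cost + annual_overhead / activities_per_year

# A course with 8 000 in direct costs at a facility carrying 60 000
# in annual overhead:
print(cost_per_activity(8_000, 60_000, 5))   # 5 activities/year
print(cost_per_activity(8_000, 60_000, 30))  # 30 activities/year
```

Running the sketch shows the per-activity cost falling from 20 000 to 10 000 as utilization rises, which is the point made above about efficient use of the facility.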
Direct operating costs may include trainee and instructor travel, accommodation, meals and
daily expenses, course and tutor fees, WMO staff costs, student notes and specific course
consumables, and trainee time away from work.
Indirect or overhead costs could include those relating to training centre buildings (classrooms,
workshops and laboratories), equipment and running costs, teaching and administration
staff salaries, WMO administration overheads, the cost of producing course materials (new
course design, background notes, audiovisual materials), and general consumables used
during training.
In general, overall costs for the various modes of training may be roughly ranked from the lowest
to the highest as follows (depending on the efficiency of resource use):
(b) Online learning courses and webinars (development costs may vary);
(f) Interactive online learning modules (high initial production cost, but low cost over
the life cycle);
The provision of the meteorological observations function within an NMHS or related agencies
may be accomplished by a variety of skilled personnel, including meteorologists, climatologists,
geographers, meteorological instrument technicians and meteorological technicians. It can also
be accomplished by a range of other people not directly within the sphere of the NMHS, such
as farmers, police, clerical workers, or private citizens. Third‑party (for example, universities,
international and regional institutions and research centres) and private‑sector organizations
might also contribute to this function.
This annex sets out a competency framework for personnel (primarily professional
meteorological observers) involved in the provision of the meteorological observations
function, but it is not necessary that each person has the full set of competencies as set out
in the framework. However, within specific application conditions (see below), which
might be different for each organization or region, it is expected that any institution providing
meteorological observation services will have staff members somewhere within the organization
who together demonstrate all the competencies. The performance components as well as the
knowledge and skill requirements that support the competencies should be customized based
on the particular context of an organization. However, the general criteria and requirements
provided here will apply in most circumstances.
Application conditions
The application of the competency framework will depend on the following circumstances,
which will be different for each organization:
(b) The way in which internal and external personnel are used to provide meteorological
observation services;
(c) The available resources and capabilities (financial, human, technological, and facilities), and
organizational structures, policies and procedures;
Competency description
Appraise meteorological conditions to identify the significant and evolving situation that is
affecting or will likely affect the area of responsibility throughout the watch period.
Performance components
(b) Understand the potential influence of the evolving meteorological situation on subsequent
observations;
(c) Identify meteorological symptoms that may lead to the onset of significant weather.
(b) Identification of clouds and other meteors using the International Cloud Atlas: Manual on the
Observation of Clouds and Other Meteors (WMO-No. 407) as guidance;
(e) Standard operating procedures (SOPs) and prescribed practices for monitoring
weather conditions.
Competency description
Perform surface observations of meteorological variables and phenomena, and their significant
changes, according to prescribed practices.
Performance components
– Precipitation
– Atmospheric pressure
– Temperature
– Humidity
– Wind
– Cloud
– Visibility
– Solar radiation
– Sunshine duration
– Evaporation
– Soil temperature
– Other specialized observations as required (for example, soil moisture, sea state,
atmospheric composition, wind shear, leaf wetness, phenology)
(b) Encode and transmit surface observations using prescribed codes and methods.
(b) Cloud classification as defined in the International Cloud Atlas: Manual on the Observation of
Clouds and Other Meteors (WMO-No. 407);
(h) Use of meteorological codes to record observations (for example, according to the Manual
on the Global Data-processing and Forecasting System (WMO-No. 485) and the Manual on
Codes (WMO-No. 306), Volumes I.1, I.2, I.3 and II).
Competency description
Performance components
– Balloon release;
(d) Encode and transmit upper‑air observations using prescribed codes and methods.
Competency description
Make observations utilizing remote‑sensing technology, for example, satellite, weather radar,
radar wind profiler, wind lidar, ceilometer, microwave radiometer, lightning detection system,
and the like.
Performance components
(a) Interpret information derived from remote‑sensing technology in making observations (for
example, ceilometer for cloud base height in synoptic observations and meteorological
aerodrome reports);
(b) Cross‑check observations obtained from alternative observing techniques (for example,
remote sensing versus in situ measurements) to ensure consistency (for example, compare
visibility information recorded by visibility meters with satellite imagery (fog, sandstorms)
and manual observations).
(a) Understanding of the physical principles of operation, the particular technical configuration
and the limitations of surface‑based and space‑based remote‑sensing technology being
utilized (for example, weather radar, wind lidar, ceilometer, lightning detection system,
radar wind profiler, microwave radiometer);
(b) Knowledge of the use of different meteorological and oceanographic information derived
from remote‑sensing technology (for example, imagery from different channels of satellites,
wind field from Doppler weather radars).
Competency description
Performance components
(a) Regularly inspect meteorological instruments (for example, raingauges, wet bulb
thermometers), automated observing systems (for example, AWS, weather radar fault
status), communications systems and backup systems (for example, power);
(b) Conduct routine maintenance tasks as prescribed (for example, change wet bulb wick or
recorder charts, clean pyranometer dome or ceilometer window);
1 See also competency 2 in instrumentation competencies, Annex 5.B.
102 GUIDE TO INSTRUMENTS AND METHODS OF OBSERVATION - VOLUME V
(a) SOPs and prescribed practices for carrying out inspection of instruments and
communications systems, and the like;
(b) Accuracy requirements for instrumentation and measurements (for example, as specified
in the present Guide and other WMO or International Civil Aviation Organization (ICAO)
regulatory and guidance materials);
(g) Hazard awareness in the vicinity of instruments and communications systems (for example,
near electrical cables, working at heights, electromagnetic radiation);
(h) Prescribed contingency plans (for example, failure of power and communications systems,
damage to infrastructure during severe weather events).
Competency description
Performance components
(a) Monitor all observations to check for errors and inconsistencies, correct errors or flag data in
accordance with prescribed procedures and take follow‑up action;
(c) Check observational messages for format and content before issuance and make corrections
if required;
(c) Accuracy requirements for measurements (for example, as specified in the present Guide
and other WMO or ICAO regulatory and guidance materials);
(f) Prescribed contingency plans (for example, data transmission failure, power failure).
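As a rough illustration of performance component (a), the following sketch flags out-of-range and internally inconsistent values before issuance. The variable names, plausibility limits and flag wording are illustrative assumptions, not prescribed WMO quality-control procedures:

```python
# Minimal quality-control sketch (not a prescribed WMO procedure):
# flag surface observations that fall outside plausible limits or are
# internally inconsistent, as a checker would before issuance.

PLAUSIBLE_RANGE = {          # illustrative limits only
    "air_temp_c": (-80.0, 60.0),
    "dewpoint_c": (-80.0, 40.0),
    "pressure_hpa": (870.0, 1085.0),
}

def flag_observation(obs):
    """Return a list of QC flags for one observation record."""
    flags = []
    for var, (lo, hi) in PLAUSIBLE_RANGE.items():
        if var in obs and not lo <= obs[var] <= hi:
            flags.append(f"{var}: outside plausible range")
    # Internal consistency: dewpoint cannot exceed air temperature.
    if "air_temp_c" in obs and "dewpoint_c" in obs:
        if obs["dewpoint_c"] > obs["air_temp_c"]:
            flags.append("dewpoint exceeds air temperature")
    return flags

print(flag_observation({"air_temp_c": 21.4, "dewpoint_c": 25.0}))
```

In practice a flagged record would be corrected or annotated and follow-up action taken, as the component prescribes; the sketch only shows the detection step.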
Competency description
Perform all observing tasks in a safe and healthy working environment, at all times complying
with occupational safety and health regulations and procedures.
Performance components
(a) Safely handle, store and dispose of hydrogen and the chemicals used for
generating hydrogen;
(b) Safely handle, store and dispose of mercury, and equipment containing mercury;
(c) Safely handle, store and dispose of other toxic or dangerous substances, and equipment
containing these substances (such as wet‑cell batteries);
(e) Safely perform all observing tasks while minimizing exposure to hazardous environmental
conditions (severe weather, lightning, flood, hurricane, fires, and the like);
(f) Safely perform all observing tasks in the presence of safety hazards (working at heights, in
the proximity of microwave radiation, compressed gases, and the like);
(a) Occupational safety and health requirements and procedures (for example, hydrogen,
mercury, chemical, electrical safety and working at height);
(c) Hazard register summarizing all potential hazards and control measures in the workplace to
enhance occupational safety.
ANNEX 5.B. COMPETENCY FRAMEWORK FOR PERSONNEL INSTALLING
AND MAINTAINING INSTRUMENTATION
The provision of instrument installation and maintenance services within an NMHS or related
services might be accomplished by a variety of skilled personnel, including meteorologists,
instrument specialists and technicians, engineers and IT personnel. Personnel in third‑party
organizations (for example, private contractors, communication service providers and
instrument maintenance agents) and other providers might also supply installation and
maintenance services for various meteorological observing instruments.
This annex sets out a competency framework for personnel involved in the installation and
maintenance of meteorological observing instruments,1 but it is not necessary that each person
has the full set of competencies. However, within specific application conditions (see below),
which will be different for each organization, it is expected that any institution providing the
instrument installation and maintenance services will have staff members somewhere within the
organization who together demonstrate all the competencies. The performance components
as well as the knowledge and skill requirements that support the competencies should be
customized based on the particular context of an organization. However, the general criteria and
requirements provided here will apply in most circumstances.
Application conditions
The application of the competency framework will depend on the following circumstances,
which will be different for each organization:
(b) The way in which internal and external personnel are used to provide the instrument
installation and maintenance services;
(c) The available resources and capabilities (financial, human, technological, and facilities), and
organizational structures, policies and procedures;
(e) WMO guidelines, recommendations and procedures for instrument installation and
maintenance services.
3. Diagnose faults
1 In this document, the competency refers to the performance required for effective installation and maintenance of
minor pieces of observing instruments. The competencies for large meteorological observing infrastructures, such
as those including radars and wind profilers, are covered under observing programme and network management
competencies.
Competency description
Install, test and commission meteorological observing instruments and communications systems.
Performance components
(c) Install instruments and communication systems (including simple site preparation);
(d) Coach observing and technical staff in the operation and maintenance of the instruments
(including provision of SOPs), standard operating instructions, system manuals, wiring
diagrams, and the like;
(f) Complete site classification for variable(s) concerned, prepare and submit instrument and
variable metadata to WIGOS via the Observing Systems Capability Analysis and Review
Tool (OSCAR);
(c) Use of meteorological codes to record observations (for example, according to the Manual
on the Global Data-processing and Forecasting System (WMO-No. 485) and the Manual on
Codes (WMO-No. 306), Volumes I.1, I.2, I.3 and II);
(i) Occupational safety and health requirements for instruments and systems.
Competency description
Performance components
(a) Schedule and carry out preventive maintenance and site inspection following prescribed
procedures (for example, change wet bulb wick or recorder charts, clean pyranometer
dome or ceilometer window, change anemometer bearings, and carry out preventive
maintenance on more sophisticated pieces of equipment such as radars and AWSs as
specified in the SOPs);
(e) Perform on‑site calibration checks to ensure that instrument performance is within
tolerance, following prescribed procedures;
(f) Provide guidance and refresher training, remotely if necessary, to on‑site staff, to maintain
compliance with prescribed methods of operating the instruments, for making observations
and with procedures for the reduction of observations;
(g) Inspect the exposure of instruments and remove any obstacles nearby if necessary;
(e) Maintenance and site inspection manuals, SOPs, practices and quality
management systems;
2 See also competency 5 in observing programme and network management competencies, Annex 5.D.
3 See also competency 5 in meteorological observations competencies, Annex 5.A.
4 For site inspection tasks, refer to the present Guide, particularly Volume I, Chapter 1, 1.3.5.1 and the present
volume, Chapter 1, 1.10.1; also to the Guide to the Global Observing System (WMO-No. 488), particularly Chapter 3,
3.1.3.8 and 3.1.3.11; and the Manual on the WMO Integrated Global Observing System (WMO-No. 1160), particularly
Chapter 3, 3.4.8.
(h) Occupational safety and health requirements for instruments and systems.
Competency description
Performance components
(b) Inspect observational instruments, communications systems, power supply facilities and
auxiliary infrastructure for faults;
(c) Provide guidance, remotely if necessary, to on‑site staff to identify and diagnose
minor faults;
(d) Record all faults and their occurrence time in a maintenance log or metadata repository;
(c) Use of meteorological codes to record observations (for example, according to the Manual
on the Global Data-processing and Forecasting System (WMO‑No. 485) and the Manual on
Codes (WMO-No. 306), Volumes I.1, I.2, I.3 and II);
(h) Occupational safety and health requirements for instruments and systems;
(i) Contingency planning to ensure continuity of observations (for example, in the event of
power, sensor or system failure, backup sensors and communications systems).
Competency description
Performance components
(a) Provide guidance, remotely if necessary, to on‑site staff to repair minor faults;
(d) Perform tests after repair to ensure compliance with performance requirements;
(e) Record repair actions taken and time of resuming data acquisition in a maintenance log
or metadata repository.
(c) Use of meteorological codes to record observations (for example, according to the Manual
on the Global Data-processing and Forecasting System (WMO-No. 485) and the Manual on
Codes (WMO-No. 306), Volumes I.1, I.2, I.3 and II);
(j) Occupational safety and health requirements for instruments and systems.
Competency description
Perform all tasks in a safe and healthy working environment, at all times complying with
occupational safety and health regulations and procedures.
Performance components
(b) Raise safety awareness among other employees and visitors to the site;
(c) Continuously monitor the workplace for occupational safety and health hazards and correct
or mitigate non‑conformances;
(f) Safely handle, store and dispose of all hazardous chemicals (for example, mercury,
hydrogen and the chemicals used for generating hydrogen, and batteries);
(g) Perform safely in the proximity of electrical hazards, microwave radiation, weather‑related
hazards and when working at heights or in confined spaces;
(b) Safety procedures in handling hazardous materials (for example, mercury, hydrogen and
the chemicals used for generating hydrogen, and batteries);
(c) Safety procedures for electrical hazards, microwave radiation, weather‑related hazards and
when working at heights or in confined spaces;
The provision of instrument calibration services within an NMHS or related services might be
accomplished by a variety of skilled personnel, including meteorologists, instrument specialists,
technicians and engineers. Third‑party organizations (for example, private contractors,
calibration service providers and laboratories) might also provide calibration services for various
meteorological observing instruments.
This annex sets out a competency framework for personnel working in calibration laboratories
and/or providing centralized calibration services for meteorological observing instruments,
but it is not necessary that each person has the full set of competencies. However, within
specific application conditions (see below), which will be different for each organization, it
is expected that any institution providing the instrument calibration services will have staff
members somewhere within the organization who together demonstrate all the competencies.
The performance components as well as the knowledge and skill requirements that support
the competencies should be customized based on the particular context of an organization.
However, the general criteria and requirements provided here will apply in most circumstances.
Application conditions
The application of the competency framework will depend on the following circumstances,
which will be different for each organization:
(b) The way in which internal and external personnel are used to provide the instrument
calibration services;
(c) The available resources and capabilities (financial, human, technological, and facilities),
and organizational structures, policies and procedures;
(e) WMO guidelines, recommendations and procedures for instrument calibration services.
1. Calibrate instruments
1 “Archiving”, in this context, is the function of storing, keeping secure, and ensuring discoverability, accessibility
and retrievability of data and information.
Competency description
Execute calibrations in accordance with standard calibration procedures, from item handling to
editing of calibration certificates.
Performance components
(d) The basics of metrology and uncertainty computation, including knowledge of VIM,
SI, measurement standards and traceability, measurement uncertainty and errors, and
calculation of uncertainty using prescribed methods;
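The basic uncertainty calculation referred to in (d) can be sketched as follows, assuming independent standard uncertainty components combined in quadrature (root-sum-square) in line with the GUM. The component values and the coverage factor application are illustrative only, not taken from any real calibration certificate:

```python
# Sketch of a basic uncertainty computation: combine independent
# standard uncertainty components in quadrature, then expand with a
# coverage factor k (k = 2 gives roughly 95 % coverage for a normal
# distribution). Figures below are invented for illustration.
import math

def combined_standard_uncertainty(components):
    """Root-sum-square of independent standard uncertainty components."""
    return math.sqrt(sum(u ** 2 for u in components))

# e.g. reference-standard, resolution and repeatability contributions (degC)
u_c = combined_standard_uncertainty([0.03, 0.04, 0.12])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2
print(round(u_c, 3), round(U, 3))
```

Note how the largest component dominates the quadrature sum, which is why identifying and reducing the dominant contribution is usually the most effective way to improve a laboratory's uncertainty budget.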
Competency description
Performance components
(c) Compare the instrument with standards and evaluate its functionality;
(d) The basics of metrology and uncertainty computation, including knowledge of VIM,
SI, measurement standards and traceability, measurement uncertainty and errors, and
calculation of uncertainty using prescribed methods;
Competency description
Develop, prepare, organize and manage the calibration activities of the calibration laboratory.
Performance components
(a) Manage the work of the calibration laboratory, including quality and technical aspects
(covering traceability of standards, uncertainty budget evaluation) in accordance with
ISO/IEC 17025 – General Requirements for the Competence of Testing and Calibration
Laboratories;
(b) Plan and organize the regular calibrations (either internal or external, as required) of
reference standards following SOPs and/or relevant WMO guidance;
(c) Plan, design and procure the physical infrastructure for calibration activities
(test chambers, standards, fixed‑point cells, pressure generators, and the like) and the
applications required to conduct calibration activities;
(d) Monitor the quality of the laboratory calibration activities and determine the laboratory’s
applicable calibration and measurement capability (CMC);
(f) Communicate with customers on calibration issues, including explaining the results of
calibrations;
(e) Conduct internal and external audits and, where possible, interlaboratory comparisons
(ILCs), as recommended by ISO/IEC 17025.
(c) Advanced metrology and uncertainty computation including, in addition to the basics,
detailed knowledge of Guide to the Expression of Uncertainty in Measurement (ISO/IEC, 2008)
or equivalent, and application of the Guide to the Expression of Uncertainty in Measurement
framework to measurement uncertainty evaluation;
(e) Quality‑related requirements (for example, ISO 9001, ISO/IEC 17025, good
laboratory practice);
Competency description
Install and maintain the physical infrastructure for calibration activities (test chambers,
standards, fixed‑point cells, pressure generators, and the like) and the applications required to
conduct calibration activities.
Performance components
(a) Install and set up the physical infrastructure for calibration activities, including software;
(b) Test the equipment to ensure its compliance with the requirements;
(f) Manage site environment (air conditioning, secure electric power, and the like).
(a) Laboratory facilities and standards (including software), and their maintenance;
(e) The basics of metrology including knowledge of VIM, SI, measurement standards and
traceability;
Competency description
Develop, assess and maintain SOPs necessary for the achievement of calibrating activities,
including computing calibration uncertainties.
Performance components
(a) Develop SOPs taking into account available laboratory facilities and quality management
requirements;
(b) Advanced metrology and uncertainty computation including, in addition to the basics,
detailed knowledge of Guide to the Expression of Uncertainty in Measurement (ISO/IEC, 2008)
or equivalent, application of the Guide to the Expression of Uncertainty in Measurement
framework to measurement uncertainty evaluation, conducting ILCs and determination of
the CMC of the laboratory;
(d) Quality requirements (for example, ISO 9001, ISO/IEC 17025, good laboratory practice);
Competency description
Performance components
(a) Archive calibration activity measurement data and metadata and the associated records;
Knowledge of prescribed practices for managing the archival of data and records.
Competency description
Perform all calibration tasks in a safe and healthy working environment, at all times complying
with occupational safety and health regulations and procedures, and security requirements.
Performance components
(a) Safely handle, store and dispose of mercury, and equipment containing mercury;
(b) Safely handle, store and dispose of other toxic or dangerous substances, and equipment
containing these substances (such as wet‑cell batteries);
(d) Safely perform all calibration tasks in the presence of safety hazards;
(e) Ensure the security (access restrictions, and the like) of the calibration laboratory and
instruments under test.
This annex sets out a competency framework for personnel involved in the management of
observing programmes and networks. It is not necessary that each person has the full set of
competencies.1 However, within specific application conditions (see below), which will be
different for each organization, it is expected that any institution managing an observing
programme and network operation will have staff members somewhere within the organization
or external service providers who together demonstrate all the competencies. The performance
components as well as the knowledge and skill requirements that support the competencies
should be customized based on the particular context of an organization. However, the general
criteria and requirements provided here will apply in most circumstances.
In planning and managing the observing programme and network operation, the relevant
regulatory requirements and guiding principles from the Manual on the WMO Integrated Global
Observing System (WMO-No. 1160) should be taken into account (for example, Appendices 2.1
and 2.5). The WMO Rolling Review of Requirements process (https://community.wmo.int/en/rolling-review-requirements-process-2023-version), in combination with OSCAR
(https://oscar.wmo.int), should be used so that the capabilities of the observing programme can be reviewed
and improved to meet the relevant data requirements under various WMO application areas.
Application conditions
The application of the competency framework will depend on the following circumstances,
which will be different for each organization:
(b) The way in which internal and external personnel are used to provide the observing
programme and network management services;
(c) The available resources and capabilities (financial, human, technological, and facilities), and
organizational structures, policies and procedures;
(e) WMO guidelines, recommendations and procedures for observing programme and
network management.
2. Procure equipment
1 In the present context, “competency” refers to the performance required for effective management of an observing
programme involving large meteorological observing networks, such as those including radars and wind profilers.
CHAPTER 5. TRAINING OF INSTRUMENT SPECIALISTS 117
Competency description
Performance components
(c) Identify the required observational instrumentation to fill the identified gaps;
(d) Design network topology and structure required to fill the identified gaps, taking into
account the inclusion of external (so‑called third‑party) data sources;
(e) Identify the associated human resources required (quantities and competencies) for the
sustainable operation of the proposed observing programme;
(f) Identify the required supporting infrastructure (for example site, buildings,
communications);
(g) Prepare a fully costed life cycle plan for the sustainable operation of the proposed
observing programme;
(h) Document in detail the proposed observing programme and develop the
implementation plan;
(i) Check that the final observing programme satisfies the original specified requirements
(review and obtain feedback from users);
(j) Develop (or update existing) contingency plan and business continuity plan for the
observing programme.
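The whole-life costing behind a plan such as that in (g) can be sketched as a discounted sum of capital and recurrent costs. The function below is a minimal illustration only; the figures, the 3% discount rate and the function name are assumptions for the example, not WMO guidance.

```python
# Sketch of a whole-life (life cycle) cost as a net present value:
# capital outlay plus recurrent costs discounted over the asset lifetime.
# All figures and the discount rate are illustrative assumptions.

def whole_life_cost(capital: float, annual_recurrent: float,
                    lifetime_years: int, discount_rate: float) -> float:
    """Capital outlay plus discounted recurrent costs over the lifetime."""
    recurrent = sum(annual_recurrent / (1 + discount_rate) ** year
                    for year in range(1, lifetime_years + 1))
    return capital + recurrent

# Example: 100 000 to install, 8 000/year to operate and maintain,
# a 10-year life, discounted at 3%.
total = whole_life_cost(100_000, 8_000, 10, 0.03)
print(f"Whole-life cost: {total:,.0f}")
```

A calculation of this kind makes the trade-off between capital and recurrent spending explicit when comparing candidate network designs.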
(a) Users’ requirements for data under various WMO application areas;
(d) Financial planning and management, including knowledge of different financial accounting
models – for example, accrual and cash accounting, asset versus recurrent costing,
cost–benefit analysis, and whole‑life costing;
(g) Familiarity with WMO regulations, guidelines and activities (for example, the Guide to
Instruments and Methods of Observation (WMO-No. 8), the Guide to the Global Observing
System (WMO-No. 488), the Manual on the WMO Integrated Global Observing System
(WMO-No. 1160), the Rolling Review of Requirements, OSCAR and CIMO Testbeds);
(h) Familiarity with the Implementation Plan for the Evolution of Global Observing Systems and
any national observing system strategies;
Competency description:
Performance components:
(a) Confirm procurement scope with the planning team, including availability of funds to meet
capital and operational costs;
(b) Conduct market surveys to identify suitable models of instruments meeting the observation
requirements;
(c) Conduct engineering design and/or draw up functional specifications of the instruments to
be procured;
(d) Initiate tender or purchasing processes for equipment and infrastructure (obtain the
necessary approvals) and prepare and issue procurement documents:
– Tender evaluation;
– Purchase recommendation;
– Supplier appointment;
(g) Occupational safety and health requirements for instruments and systems.
Competency description:
Select, acquire and commission observing sites for installation of instruments and
communications systems.
Performance components:
(a) Identify suitable sites for long‑term observations that meet observational requirements
(for example, conduct site survey to ensure representative measurements of the required
variables can be taken to satisfy the data requirements of relevant WMO application areas);
(b) Carry out detailed site planning and site acquisition (ensure reliable power supply and
communications; ascertain the best form(s) of communications (satellite, copper cable, optical
fibre, microwave link, General Packet Radio Service, private wire); arrange road access, site
exposure, granting of the site lease, acquisition of formal land allocation notification, and the like);
(c) Prepare site or enclosure (for example, civil works: clear and level the site, establish power
and communications; ensure fencing of site and road access);
(d) Provide site plan, layout diagrams of observing equipment, power supply, communication
links, and the like;
(f) Confirm site conditions, for example, flatness of site, earthing conditions (< 10 ohms) for
lightning protection, low electromagnetic wave background for lightning location detector,
quality of power supply, communications bandwidth, roadways and fencing;
(g) Complete the handover of site (for example, obtain site acceptance certificates);
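The site condition checks in (f) lend themselves to a simple automated acceptance test. The sketch below is illustrative only: apart from the 10 ohm earthing figure stated above, the limit values and all field names are assumptions for the example, not a formal acceptance specification.

```python
# Sketch: check measured site-acceptance values against thresholds.
# Only the < 10 ohm earthing limit comes from the text above; the other
# limits and all field names are assumptions for this example.

SITE_LIMITS = {
    "earth_resistance_ohm": lambda v: v < 10,           # lightning protection
    "supply_voltage_v":     lambda v: 216 <= v <= 264,  # assumed 240 V +/- 10%
    "comms_bandwidth_kbps": lambda v: v >= 64,          # assumed minimum link
}

def site_acceptance(measurements):
    """Return the names of checks that fail; an empty list means all pass."""
    return [name for name, ok in SITE_LIMITS.items()
            if name in measurements and not ok(measurements[name])]

failures = site_acceptance({
    "earth_resistance_ohm": 7.2,
    "supply_voltage_v": 238.0,
    "comms_bandwidth_kbps": 48.0,
})
print(failures)
```

Recording the outcome of each check alongside the site acceptance certificate in (g) gives a traceable basis for the handover decision.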
(a) The Guide to Instruments and Methods of Observation (WMO-No. 8) (for example, Volume I,
Chapter 1, in particular 1.3, and Annex 1.D – Siting classification for surface observing
stations on land (WMO/ISO); Annex 1.F – Station exposure description);
(c) ICTs;
Competency description
Install, test and commission major2 components of observing networks (for example, weather
radars, vertical wind profilers).
Performance components
(a) Assemble, test and calibrate network components (for example, instruments,
communications, support systems) before transport to site;
(c) Install network components and carry out user acceptance tests;
(d) Ensure training is conducted to meet user or operational requirements (including SOPs and
instructions, systems manuals, wiring diagrams, and the like);
(e) Complete site classification for variable(s) concerned; prepare and submit instrumentation
metadata to WIGOS via OSCAR;
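A minimal metadata record of the kind prepared under (e) might be assembled as follows. The field names and values are placeholders for this sketch only; the authoritative structure is defined by the WIGOS Metadata Standard (WMO-No. 1192).

```python
import json

# All field names and values below are illustrative placeholders, not the
# WIGOS Metadata Representation itself (see WMO-No. 1192 for the standard).
record = {
    "wigos_station_identifier": "0-20000-0-XXXXX",  # series-issuer-issue-local ID
    "observed_variable": "air_temperature",
    "siting_classification": 2,   # WMO/ISO siting classes run from 1 to 5
    "latitude_deg": -37.81,
    "longitude_deg": 144.97,
    "elevation_m": 31.0,
}

print(json.dumps(record, indent=2))
```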
(b) The observing programme, including existing network components or new components to
be installed in the observing network;
2 This indicates components that comprise a significant investment for an organization and so require a structured
project management approach, as opposed to the implementation of minor pieces of observing infrastructure, the
competencies for which are covered under the Instrumentation competencies.
Competency description
Performance components
(a) Implement network maintenance (preventive, corrective, adaptive), site inspection and
instrument calibration programmes3 to ensure correct and sustainable functioning of
all equipment;
(b) Develop and employ quality assurance tools (for regular diagnosis of system functions and
parameters) for all instrumentation both in situ and remote sensing;
(c) Develop and maintain a data quality monitoring system (for example, manual and/or
automated data quality control systems) to ensure data traceability and metadata accuracy;
(d) Coordinate with external sources (partners, volunteers and other third‑party sources such
as crowdsourcing) regarding the provision of their data to ensure the quality of data and
homogeneity of the integrated network;
(e) Prepare contingency plans for network operation and data acquisition, including periodic
testing of effectiveness;
(f) Monitor network performance using appropriate tools and schemes, and devise indicators
to measure network performance (for example, data availability, timeliness);
(g) Document all operational procedures (for example, network maintenance, instrument
calibration, data quality control algorithms, contingency plans);
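Indicators such as those in (f), data availability and timeliness, can be computed directly from expected and received report times. The sketch below is a minimal illustration; the 5-minute timeliness cut-off and the function name are assumptions for the example.

```python
# Sketch: compute data availability and timeliness for a reporting cycle.
# The 5-minute timeliness cut-off is an illustrative assumption.
from datetime import datetime, timedelta

def performance(expected, received, cutoff=timedelta(minutes=5)):
    """expected: scheduled report times; received: maps those times to arrival times."""
    n = len(expected)
    got = [t for t in expected if t in received]
    timely = [t for t in got if received[t] - t <= cutoff]
    return {"availability_pct": 100.0 * len(got) / n,
            "timeliness_pct": 100.0 * len(timely) / n}

# 24 hourly reports expected; two are lost and one arrives late.
base = datetime(2023, 1, 1)
expected = [base + timedelta(hours=h) for h in range(24)]
received = {t: t + timedelta(minutes=2) for t in expected[:22]}
received[expected[0]] = expected[0] + timedelta(minutes=12)
print(performance(expected, received))
```

Tracked over time, such indicators reveal degrading sites or communication links before they fail outright.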
(b) Familiarity with WMO guidelines and regulations on meteorological observations (for
example, the Guide to Instruments and Methods of Observation (WMO-No. 8), the Manual on
the WMO Integrated Global Observing System (WMO-No. 1160) and the WIGOS Framework
Implementation Plan);
3 Including remote‑sensing equipment. Note, for example, that detailed guidance on the maintenance of radars and
wind profilers is given in the Guide to Instruments and Methods of Observation (WMO-No. 8), Volume III, Chapter 7, 7.7,
and in Dibbern et al. (2003), Chapter 4, respectively.
(e) Asset management standards, for example, ISO 55000 (Asset Management:
Overview, Principles and Terminology) and the Global Forum on Maintenance and
Asset Management;
(f) Occupational safety and health requirements for the observing network.
Competency description
Manage the observing programme (technical, financial and human resources, and the like) to
ensure observing programme requirements are met safely and sustainably.
Performance components
(a) Develop financial and human resource plans and secure the resources that ensure
sustainability of the observing programme;
(b) Regularly evaluate and reassess staff performance and provide ongoing training (in liaison
with the training section if necessary) to ensure maintenance of competency of all staff
involved in the observing programme;
(c) Coordinate with users and, as required, update data requirements of the observing
programme (for example, real‑time observations, NWP applications and climate
monitoring);
(d) Regularly review the short‑term and long‑term goals of the observing programme and
identify areas for its continuous improvement (for example, improved standardization,
network optimization and development);
(e) Explore and implement technical solutions to address the improvement areas identified,
taking into account technological change in instrumentation and data communication
methods;
(f) Promote awareness of, and compliance with, occupational safety and health requirements
among all staff.
(a) Financial planning including knowledge of different financial accounting models (for
example, accrual and cash accounting, asset versus recurrent costing, cost–benefit analysis,
and whole‑life costing);
(e) Familiarity with WMO regulations, guidelines and activities (for example, the Technical
Regulations (WMO-No. 49), the Guide to the Global Observing System (WMO-No. 488), the
Manual on the WMO Integrated Global Observing System (WMO-No. 1160) and OSCAR);
Craig, R. L., Ed. Training and Development Handbook: A Guide to Human Resource Development; McGraw‑Hill:
New York, USA, 1987.
Dibbern, J.; Engelbart, D.; Goersdorf, U. et al. Operational Aspects of Wind Profiler Radars
(WMO/TD-No. 1196). Instruments and Observing Methods Report No. 79; World
Meteorological Organization (WMO): Geneva, 2003.
Imai, M. Kaizen: The Key to Japan’s Competitive Success; Random House: New York, USA, 1986.
International Organization for Standardization (ISO). Asset Management – Overview, Principles and
Terminology; ISO 55000:2014. Geneva, 2014. https://www.iso.org/standard/55088.html.
International Organization for Standardization (ISO). Quality Management Systems – Requirements;
ISO 9001:2015. Geneva, 2015a. https://www.iso.org/standard/62085.html.
International Organization for Standardization (ISO). Quality Management Systems – Fundamentals and
Vocabulary; ISO 9000:2015. Geneva, 2015b. https://www.iso.org/standard/45481.html.
International Organization for Standardization (ISO). General Requirements for the Competence of Testing
and Calibration Laboratories; ISO/IEC 17025:2017. Geneva, 2017. https://www.iso.org/
publication/PUB100424.html.
International Organization for Standardization (ISO). Quality Management – Quality of an Organization
– Guidance to Achieve Sustained Success; ISO 9004:2018. Geneva, 2018a. https://www.iso.org/
standard/70397.html.
International Organization for Standardization (ISO). Guidelines for Auditing Management Systems;
ISO 19011:2018. Geneva, 2018b. https://www.iso.org/standard/70017.html.
International Organization for Standardization (ISO). Risk Management – Guidelines; ISO 31000:2018.
Geneva, 2018c. https://www.iso.org/standard/65694.html.
International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).
Uncertainty of Measurement – Part 3: Guide to the Expression of Uncertainty in Measurement
(GUM:1995); ISO/IEC Guide 98‑3:2008, incl. Suppl. 1:2008/Cor 1:2009 and Suppl. 2:2011.
Geneva, 2008. https://www.iso.org/standard/50461.html. (Equivalent to: JCGM, 2008:
Evaluation of Measurement Data – Guide to the Expression of Uncertainty in Measurement.
JCGM 100:2008, corrected in 2010; incl. JCGM 101:2008 and JCGM 102:2011.)
Moss, G. The Trainer’s Handbook; Ministry of Agriculture and Fisheries: Wellington, 1987.
Walton, M. The Deming Management Method; Putnam Publishing: New York, USA, 1986.
World Meteorological Organization (WMO). Manual on Codes (WMO-No. 306), Volumes I.1, I.2, I.3
and II. Geneva.
World Meteorological Organization (WMO). Guide to the Global Observing System (WMO-No. 488).
Geneva, 2010.
World Meteorological Organization (WMO). Guidelines for Trainers in Meteorological, Hydrological and Climate
Services (WMO-No. 1114). Geneva, 2013.
World Meteorological Organization (WMO). Guide to the Implementation of Education and Training Standards
in Meteorology and Hydrology (WMO-No. 1083), Volume I. Geneva, 2015.
World Meteorological Organization (WMO). Manual on the WMO Integrated Global Observing System
(WMO-No. 1160). Geneva, 2015.
World Meteorological Organization (WMO). Technical Regulations (WMO-No. 49), Volume II. Geneva, 2016.
World Meteorological Organization (WMO). Guide to the Implementation of Quality Management Systems
for National Meteorological and Hydrological Services and Other Relevant Service Providers
(WMO-No. 1100). Geneva, 2017.
World Meteorological Organization (WMO). International Cloud Atlas: Manual on the Observation of Clouds
and Other Meteors (WMO-No. 407). Geneva, 2017.
World Meteorological Organization (WMO). Manual on the Global Data-processing and Forecasting System
(WMO-No. 485). Geneva, 2017.
World Meteorological Organization (WMO). A Compendium of Topics to Support Management Development in
National Meteorological and Hydrological Services, ETR-No. 24; Geneva, 2018.
World Meteorological Organization (WMO). Guide to Competency (WMO-No. 1205). Geneva, 2018.
World Meteorological Organization (WMO). Developing Meteorological and Hydrological Services through
WMO Education and Training Opportunities, ETR-No. 25; Geneva, 2020.
World Meteorological Organization (WMO). WMO Knowledge‑sharing Portal. https://community.wmo.int/
activity-areas/imop/knowledge-sharing-portal.
World Meteorological Organization (WMO). WMO Education and Training Programme Moodle Site. https://
etrp.wmo.int/.