
TECHNICAL NOTE

Algorithm to describe weather conditions at


European airports

ATMAP weather algorithm


(Version 2.3)

Prepared by the Performance Review Unit in consultation


with the ATMAP MET working group

10 May 2011
DOCUMENT IDENTIFICATION SHEET

DOCUMENT DESCRIPTION

Document Title: Technical Note: Describing the weather impact algorithm developed by the ATMAP project

PROGRAMME REFERENCE INDEX:
EDITION: FINAL
EDITION DATE: 10 May 2011
SUMMARY

This paper describes Version 2.3 of the weather algorithm that has been developed in consultation with
the MET working group within the framework of the ATMAP project. The algorithm takes into
account five weather classes which could make airport airside and ANS operations more
difficult and complex.

Keywords
ATMAP ANS Weather Airport

CONTACT: Performance Review Unit, EUROCONTROL, 96 Rue de la Fusée,
B-1130 Brussels, Belgium. Tel: +32 2 729 3956, e-mail: [email protected],
web: http://www.eurocontrol.int/prc

DOCUMENT STATUS AND TYPE

STATUS: Draft / Proposed Issue / Released Issue
DISTRIBUTION: General Public / EUROCONTROL Organisation / Restricted

INTERNAL REFERENCE NAME:

TABLE OF CONTENTS

1 INTRODUCTION ..................................................................................................................................................... 1
2 FRAMEWORK FOR THE DEVELOPMENT OF THE WEATHER IMPACT ALGORITHM ......................... 1
2.1 WEATHER AND ANS/AIRPORT PERFORMANCE ......................................................................................................... 1
2.2 OBJECTIVES........................................................................................................................................................... 2
2.3 WHAT SHOULD THE ALGORITHM MEASURE? ............................................................................................................ 2
2.4 BASIC ASSUMPTIONS .............................................................................................................................................. 3
3 DESCRIPTION OF THE WEATHER ALGORITHM .......................................................................................... 4
3.1 MAIN PURPOSES OF THE ALGORITHM ...................................................................................................................... 4
3.2 HOW THE WEATHER ALGORITHM WAS DEVELOPED ................................................................................................... 4
3.3 BASIC ELEMENTS OF THE ALGORITHM ..................................................................................................................... 5
3.3.1 Weather phenomena .................................................................................................................................... 5
3.3.2 Weather class ................................................................................................................................................ 5
3.3.3 Severity code ................................................................................................................................................. 5
3.3.4 Coefficient ...................................................................................................................................................... 5
3.4 THE ALGORITHM COMPUTATION AT A GLANCE ......................................................................................................... 5
3.4.1 European level analysis................................................................................................................................ 5
3.4.2 Local airport analysis.................................................................................................................................... 7
3.5 WEATHER CLASS: CEILING & VISIBILITY ................................................................................................................. 8
3.10 WEATHER CLASS: WIND ....................................................................................................................................... 10
3.11 WEATHER CLASS: PRECIPITATIONS ....................................................................................................................... 12
3.12 WEATHER CLASS: FREEZING CONDITIONS ............................................................................................................. 13
3.13 WEATHER CLASS: DANGEROUS PHENOMENA .......................................................................................................... 15
3.14 THINGS TO BEAR IN MIND WHEN USING THE WEATHER ALGORITHM........................................................................ 17
4 DATA USED FOR ASSESSING WEATHER CONDITIONS.......................................................................... 18
4.1 METAR DATA ..................................................................................................................................................... 18
4.2 METAR LIMITATIONS .......................................................................................................................................... 18
5 ALGORITHM VALIDATION ACTIVITIES....................................................................................................... 19
5.1 WEATHER CLASSES AND SEVERITY CODES VALIDATIONS ........................................................................................ 19
5.2 COEFFICIENTS VALIDATION .................................................................................................................................. 20
6 EVOLUTION OF THE WEATHER ALGORITHM............................................................................................. 20
7 ATMAP GROUP MEMBERS................................................................................................................................. 21
8 FOR HELP WITH THE ALGORITHM ................................................................................................................ 21
9 REFERENCE DOCUMENTS ................................................................................................................................. 22
10 PL/SQL DESCRIPTION OF THE ALGORITHM......................................................................................... 22
ANNEX I: THRESHOLDS, SEVERITY CODES AND COEFFICIENTS DETERMINATION: PROCESS
DESCRIPTION................................................................................................................................................................ 23
ANNEX II: PL/SQL CODE....................................................................................................................................... 26

1 Introduction
The purpose of this document is to present an open-source weather algorithm for implementation
and local use by A-CDM groups and airport communities.

It describes the weather conditions at airports in the post-analysis phase, after the day of operations.
The algorithm quantifies the weather conditions which have to be considered when measuring ANS
performance at airports. The quantification is based on METAR messages.

The algorithm will also be used by the EUROCONTROL Performance Review Unit (PRU) for
performance analyses at European level, and as an interactive tool to exchange information
with airport communities about the impact of weather conditions on ANS and airport performance.

The algorithm has been prepared by PRU in consultation with the ATMAP Group at the request of
the Performance Review Commission (PRC) to support a uniform measurement of airport airside
performance across European airports.

The document is structured around the following sections:

- Section 1: Introduction
- Section 2: Framework for the development of the weather impact algorithm
- Section 3: Description of the weather algorithm
- Section 4: Data used to assess the weather conditions: METARs
- Section 5: Brief description of the validation of the algorithm
- Section 6: The evolution of the weather algorithm: the way ahead
- Section 7: ATMAP MET working group members
- Section 8: Contacts for help with the algorithm
- Section 9: Reference documents used for the elaboration of the weather algorithm
- Section 10: Detailed description of the algorithm (PL/SQL)

Definitions and acronyms used in this report are identical to those in ICAO SARPs, documents and
circulars, unless otherwise indicated.

2 Framework for the development of the weather impact algorithm


2.1 Weather and ANS/airport performance

ANS and airside airport performance depend on weather conditions, which must be considered
when seeking to put in perspective the level of performance achieved at a given airport.

In Collaborative Decision Making (CDM) airports, the analysis of weather is fundamental to
enhancing ANS and airport airside performance. It is necessary to cross-analyse weather data with
flight, capacity and performance data to improve ANS and airport performance.

In performance review, when analysing a given year or a given season, it is necessary to
distinguish whether a performance variation is related to a genuine improvement of airport/ANS
processes or to a variation of weather conditions from one year to another.

Differences between airports which operate under similar conditions cannot be excluded. The level
of mitigation of adverse weather conditions (infrastructure, equipment) can make the difference
from one airport to another. Furthermore, an airport may perform better on some days of operations
than on other days with similar weather conditions. The post-analysis could reveal practices which
would be worth applying systematically on all days of operations to improve performance.

ANS is directly accountable for putting in place mitigation measures for a number of weather
phenomena, particularly visibility and wind. Furthermore, ANS is accountable for minimising
the loss of capacity utilisation under convective weather such as thunderstorms and cumulonimbus.
The mitigation of the impact of precipitation and freezing conditions is the responsibility of
airport maintenance and de-icing teams. However, in such weather circumstances, ANS is
accountable for executing well the functions established in airport plans (e.g. the airport winter plan).

2.2 Objectives
Some airports have developed tools for cross-analysing weather, traffic, capacity and performance.
Examples are Amsterdam (the Capacity Prognosis tool), Brussels (the weather impact analysis tool
developed by Marc Matthys) and the "Meteo Technical Committee" inside the A-CDM at CDG. The ATMAP
weather algorithm draws upon their experiences.

The ATMAP Group would like all major European airports to include weather data in their
performance analyses at local level. As already stated, the PRU also intends to use the weather
algorithm for performance analyses and as an interactive tool to activate an informed dialogue with
airport communities about the impact of weather conditions on ANS and airport performance.

The weather algorithm has the following objectives:

- Measure weather conditions consistently across European airports;
- Provide an objective and consolidated measure of the intensity and duration of weather
  phenomena which could make ANS and airside airport operations more complex or difficult;
- Classify days of operations in two categories (good and bad weather) for high-level
  performance analyses.

When classifying days of operations into “good weather” and “bad weather” days, the main intention
is to extract the “good weather” days from a given set of days in a year or an IATA season. This will
enable ANS performance to be evaluated when the impact of weather is absent or marginal. The
second intention is to investigate in “bad weather” days how the weather phenomena have impacted
performance. Bad weather days could be classified by categories (freezing, wind, poor visibility,
etc.) and then analysed.

This approach for separating good and bad weather days was chosen because airspace users
expect ANS/airports to deliver constant and predictable performance on the majority of days of
operations in a year or IATA season. An ANS/airport which has excellent performance on "good
weather" days, but which suffers significant capacity drops on "bad weather" days or in other
marginal conditions, is not a desirable situation for airspace users.

The algorithm helps to recognise the efforts made by those airports which are subject to severe
weather conditions for a significant number of days (e.g. Scandinavian airports in winter). Usually,
these airports maintain normal operating conditions but at considerable cost (e.g. provision of snow
ploughs etc.).

2.3 What should the algorithm measure?


The ATMAP framework classifies the factors affecting performance in three high-level categories:
1. Traffic demand and its characteristics
2. Factors external to ANS (mainly noise emissions and weather)
3. Airport resources and their configuration, in relation to their status and to the status of
   weather conditions

[Figure 1 is a diagram showing three dimensions of analysis: airport traffic (from low traffic to
saturation), the status of airport resources (e.g. one or more taxiways or runways closed, radar
failure, ILS CAT III failure, bomb alert, industrial action, runway incursion) and weather
conditions (from good to bad).]

Figure 1: the 3-D approach to analysing ANS performance at airports.

The results in terms of performance depend on how the three high-level categories interact with
each other.

The algorithm provides a methodology to measure weather conditions only. The strategy for
combining the weather assessment with the other performance affecting factors is described in
other ATMAP documents (www.eurocontrol.int/prc).

It is recognised that airports have different levels of equipment and infrastructure to respond to
adverse weather. For instance, Rome (FCO) will budget less for dealing with snow and freezing
conditions than Helsinki (HEL). Under similar weather conditions (e.g. light snow for three hours),
Helsinki (HEL) and Rome (FCO) will experience different impacts on performance: a day of light
snow in HEL will be an ordinary day of operations, while in FCO punctuality will drop and
cancellations will increase. The ATMAP Group has found a practical solution to take these
differences into account (see par. 3.4.2) while using the same weather algorithm for measuring
weather conditions across all European airports.

2.4 Basic assumptions


When defining good weather conditions, we assume an aircraft population used by commercial
aviation whose tailwind limits are 10 KTS [1] or higher and whose demonstrated crosswind limits are
20 KTS or higher under dry runway conditions, and the availability of the following airport
infrastructures:
- The length and width of runways [2] are adapted to the usage of commercial aviation without
  imposing particular constraints on flight operations;
- Pavement physical characteristics and maintenance ensure good wheel-tyre friction during
  light or moderate precipitation (excluding snow);
- There are basic CNS infrastructures and visual aids for performing a precision approach
  when RVR or visibility is not less than 800 metres and the ceiling is not lower than 300 ft.

- There are basic de-icing infrastructures which can efficiently cope with light water-vapour
  phenomena (excluding snow) at temperatures not below minus 15 degrees.

[1] KTS: knots (a measure of speed expressed in nautical miles per hour).
[2] TORA, TODA, ASDA, LDA, etc.

Disruptive weather conditions are not identified as such, but are included in the class "bad
weather". However, the scoring method contained in the weather algorithm allows the degree of
severity of adverse weather conditions on "bad weather" days to be measured. The reasons for this
choice are listed below.

- The adjective "disruptive" applies to "airport operating conditions", which are the result of the
  combination of weather conditions and the airport infrastructures available to mitigate a given
  weather phenomenon.

- Similarly, the adjective "disruptive" applies to aircraft operations for a given aircraft type. For
  instance, a 25 KTS crosswind will disrupt operations for a Cessna 550, but not for the majority of
  aircraft used by commercial aviation (Boeing 737, MD80, Learjet 25, etc.).

- The wording "disruptive weather conditions" could be applied to a small number of rare weather
  conditions that may be meaningful from a safety perspective, but whose occurrence has no
  statistical significance.

3 Description of the weather algorithm


3.1 Main purposes of the algorithm
The default application of the algorithm provides the status of the weather conditions on a given day
of operations, typically in the period between 0600 and 2159 [3].

The assessment of weather conditions in a given day of operations is uniformly measured across
European airports (same algorithm, same data set used to feed the algorithm).

The assessment provides two major outputs:


1. An assessment for each weather class which could make airport operations more complex
   and difficult (ceiling & visibility, wind, freezing conditions, etc.)
2. An overall assessment of the day of operations (either a good or a bad weather day)

3.2 How the weather algorithm was developed


The weather algorithm was developed by experts from the following domains: meteorology, air
traffic control, airport operations, de-icing, airlines and mathematics. They used as a basis the
initial version of the weather algorithm which was published in the ATM Airport Performance
(ATMAP) Framework in 2009.

The methodology was developed in three phases:


1. Phase 1: an initial hypothesis was elaborated by the experts and passed to the
   mathematicians as "expert input".
2. Phase 2: the mathematicians validated the hypothesis in accordance with the process described
   in Annex I.
3. Phase 3: the results of the validation were submitted to the ATMAP Group for final consultation.

This was an iterative process which took place many times throughout the project.

A list of validation activities conducted by the ATMAP MET group is included in par. 5.

[3] Local time.

3.3 Basic elements of the algorithm

3.3.1 Weather phenomena


In this report, the term "weather phenomenon" denotes a single meteorological element which may
affect the safety of aircraft operations. For instance, true temperature, visibility, ceiling and
wind are all weather phenomena.

The codes used to describe weather phenomena in this document are taken from ICAO Annex 3
“Meteorological Service for International Air Navigation”.

3.3.2 Weather class


A weather class is a group of weather phenomena which impact airport performance. For
instance, "freezing conditions" includes up to four weather phenomena: true temperature, dew
point, water-vapour precipitation and obscuration.

3.3.3 Severity code


A severity code is a discrete number which ranks the status of a weather class from the best case
(code 1) to the worst case (code 4 or 5). A given severity code defines lower and upper bounds on
the underlying meteorological facts (e.g. temperature or wind speed) or phenomena (e.g. rain or
snow).

3.3.4 Coefficient
A coefficient is a discrete number (from 0 to 30) which assigns a score to a given severity code.
It was introduced because the severity code by itself was not sufficient to describe the
non-linear behaviour of some meteorological phenomena. For instance, the shift from moderate to
heavy snow is much more significant for airport operations than the shift from moderate to heavy
rain.

3.4 The algorithm computation at a glance

3.4.1 European level analysis

This methodology will be used by the PRU for the attribution of a daily weather status.

Step 1: For each weather observation of the day (i.e. each METAR, issued every 30 minutes),
each weather class is assessed as follows (see Figure 2):
- A severity code is assigned
- A coefficient is associated with each severity code

Example for Step 1


The METAR reports wind calm, RVR (visibility) 600 m, base of clouds 800 ft, moderate snow,
temperature -2 °C.

This METAR would score:

2 points (coefficient) for visibility and 7 points (coefficient) for snow = a total of 9 points.
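The per-METAR scoring of Step 1 can be sketched as follows. This is an illustrative Python sketch, not the PL/SQL implementation of Annex II: the parsed-field names are invented, and only the two coefficient values needed for the worked example (visibility severity code 2 scoring 2 points, moderate snow scoring 7 points) are encoded.

```python
def score_metar(parsed):
    """Assign illustrative per-class coefficients to one parsed METAR.

    Only the bands needed for the worked example are encoded here; the
    full tables are given in section 3.5 onwards and in Annex II.
    """
    scores = {"visibility": 0, "wind": 0, "precipitation": 0,
              "freezing": 0, "dangerous": 0}
    # RVR 600 m falls in severity code 2 (550-750 m), coefficient 2.
    rvr = parsed.get("rvr_m")
    if rvr is not None and 550 <= rvr <= 750:
        scores["visibility"] = 2
    # Moderate snow is assumed to carry coefficient 7, as in the example.
    if parsed.get("precip") == "moderate_snow":
        scores["precipitation"] = 7
    return scores

# The worked example: wind calm, RVR 600 m, cloud base 800 ft,
# moderate snow, temperature -2 degrees C.
metar = {"wind_kt": 0, "rvr_m": 600, "cloud_base_ft": 800,
         "precip": "moderate_snow", "temp_c": -2}
total = sum(score_metar(metar).values())  # 2 + 7 = 9 points
```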

[Figure 2 is a diagram mapping the METAR to the five weather classes: visibility & ceiling, wind,
precipitations, freezing conditions and dangerous phenomena.]

Figure 2: METAR and weather classes.

Step 2: For each weather observation of the day (i.e. each METAR, issued every 30 minutes),
the coefficients assigned to the weather classes are summed. A score is thus computed for every
observation (i.e. METAR) recorded on a given day of operations.

Example for Step 2


Let's take the METARs between 0600 and 0800 LT = 4 METARs
Score (coefficient) METAR 1 = 9
Score (coefficient) METAR 2 = 7
Score (coefficient) METAR 3 = 5
Score (coefficient) METAR 4 = 0
TOTAL of the 4 METARs = 21

[Figure 3 is a diagram of the result layer: for each weather class (visibility & ceiling, wind,
precipitations, freezing conditions, dangerous phenomena) the weather algorithm produces a score,
averaged over the day (Rvisibility, Rwind, Rprecipitation, Rfreez cond, Rdang phen).]

Figure 3: Overview of the weather algorithm computation.

Step 3: An average is computed with all scores of the day.

Example for Step 3


Let's take the METARs from 0600 until 2159. In 16 hours, 32 METARs were issued. Let's assume that
they accumulate a total score of 50 points:
50 divided by 32 = 1.56
An alternative way of achieving the same result is illustrated in Figure 3. The method proposed
there has the advantage of identifying which weather class was more severe on a given day of
operations, which makes it more useful when studying how to improve performance in adverse
weather conditions.
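Steps 2 and 3 reduce to a sum per METAR followed by an average over the day. A minimal sketch, assuming each list element is the total coefficient of one METAR:

```python
def daily_average(metar_totals):
    """Average the per-METAR total scores over all observations of a day."""
    return sum(metar_totals) / len(metar_totals)

# The text's example: 32 METARs between 0600 and 2159 cumulating 50 points.
# Any distribution summing to 50 over 32 METARs yields the same average;
# the values below are purely illustrative.
totals = [9, 7, 5, 0] + [29] + [0] * 27
avg = daily_average(totals)  # 50 / 32 = 1.5625, quoted as 1.56 in the text
```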

Step 4: Result layer: status of the day.

The computed average for the day is compared with the threshold for a "bad weather" day: R ≥ 1.5.
This value has been determined through an iterative process.

An example of daily results for EBBR is presented in Figure 4. The average score of each weather
class is represented from mid-April 2009 to February 2011. Each peak with a score above or equal
to 1.5 is considered a bad weather day.
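The day-status decision of Step 4 is then a single threshold comparison. A sketch, using the European default of 1.5 given in the text:

```python
BAD_WEATHER_THRESHOLD = 1.5  # European default, determined iteratively

def classify_day(daily_avg):
    """Flag a day as a bad weather day when the daily average R >= 1.5."""
    return "bad" if daily_avg >= BAD_WEATHER_THRESHOLD else "good"
```

For example, the 1.56 average computed in Step 3 would make the day a bad weather day.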

[Figure 4 is a time-series chart of the average daily score per weather class (visibility &
ceiling, precipitations, freezing conditions, wind, dangerous phenomena) at EBBR from 15/04/2009
to 15/01/2011, with the average daily score (0 to 8) on the vertical axis.]

Figure 4: Evolution of scores at EBBR between April 2009 and February 2011.

3.4.2 Local airport analysis


The ATMAP Group encourages airports to implement the weather algorithm and to discuss its
output in multi-disciplinary groups, such as A-CDM groups, local performance groups, airport
scheduling committees, etc.

After a minimum period of observation (at least an IATA winter season), it may be observed that the
European algorithm does not perfectly reflect the local situation. In such cases it is recommended to
modify the threshold score for the given weather class (see Figure 5).

Should the modification of the score threshold be insufficient, the local airport stakeholders may
wish to agree upon a modification to the algorithm. Any such modification should be communicated
to PRU together with supporting data analyses. The PRU will consider integrating such
modifications into the weather algorithm at European level.

3.4.2.1 Cause(s) of bad weather days


In the weather algorithm at European level, each score above zero contributes to determining the
"bad weather" days. However, at local level, stakeholders may count the same score differently. For
instance, Rome FCO may set 0.3 as the threshold for freezing conditions when counting "bad
weather" days, while Helsinki HEL may set 1.0 for the same weather class.
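The local counting rule can be sketched as follows; the 0.3 and 1.0 values come from the FCO/HEL illustration above, while the data structure and function name are assumptions of this sketch.

```python
# Hypothetical per-airport thresholds for counting a weather class as a
# cause of a "bad weather" day (values from the FCO/HEL illustration).
LOCAL_THRESHOLDS = {
    "FCO": {"freezing": 0.3},
    "HEL": {"freezing": 1.0},
}

def class_causes_bad_day(airport, weather_class, avg_class_score):
    """True when the class's average daily score exceeds the local threshold."""
    return avg_class_score > LOCAL_THRESHOLDS[airport][weather_class]
```

With these thresholds, an average freezing-conditions score of 0.5 would count towards a bad weather day at FCO but not at HEL.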

[Figure 5 is a diagram showing the two levels of assessment: at European level, a day is a bad
weather day when the sum of the class scores exceeds the default threshold (∑ Rw > 1.5); at local
level, each weather class score in the result layer is compared with an airport-specific threshold
(Rvisibility > α, Rwind > β, Rprecipitation > γ, Rfreezing cond > δ, Rdangerous ph > ε).]

Figure 5: Bad weather days: default European score and local airport score.

3.5 Weather class: ceiling & visibility

Introduction

ANS performance at airports (runway throughput, delays) varies with the level of visibility and its
mitigation. Airport performance under poor visibility depends on the instrument approach system
and on other systems (e.g. surface movement radar). The drop in performance during poor visibility
has an impact on arrival flights, but it may also have an impact on departure flights (runway
occupancy times and taxi-out durations). All else being equal, the drop in performance during poor
visibility will vary depending on the technical systems being used (ILS, MLS, satellite aids,
multilateration surface movement surveillance, etc.) and on the approved ATC procedures being used
(particularly separation minima).

Description of the METAR parsing

With regard to the weather class “ceiling & visibility”, the parsing methodology for METARs delivers
the following parsed fields:
 visibility values: two values for general visibility (VIS) and two values for Runway Visual
Ranges (RVRs point A), and
 3 groups of cloud values (CLD_base, CLD_type and CLD_cover).

Overall presentation of the weather class

Visibility &   Visibility / RVR [metres]      Cloud base / vertical visibility      Coef
Ceiling                                       with cloud cover OVC or BKN [feet]

Code 1         VIS > 1500 and RVR > 1450                                            0
               between 800 and 1450     or   between 300 and 1450                   0
Code 2         between 550 and 750      or   between 200 and 250                    2
Code 3         between 350 and 500      or   between 100 and 150                    4
Code 4         ≤ 325                    or   ≤ 50                                   5

Figure 6: Weather class Ceiling and Visibility characteristics.

Description of the expert inputs for assigning severity codes (first column of Figure 6)

The ceiling & visibility severity codes have been developed on the basis of the definition of
"instrument runway" reported in ICAO Annex 14 (Aerodromes). For the purposes of this weather
methodology:
- severity code 1 corresponds to "non-precision approach runway" values;
- severity code 2 corresponds to "precision approach runway, Category I" values;
- severity code 3 corresponds to "precision approach runway, Category II" values;
- severity code 4 corresponds to "precision approach runway, Category III" values.

It is worth distinguishing when "general visibility" is used in the algorithm and when it is
replaced by RVR:
- When point A of the RVR (corresponding to touchdown) is ≥ 1500 m on all runways with RVR,
  general visibility is used in the algorithm.
- When point A of the RVR of at least one runway is below 1500 m, the RVR is used in place of
  general visibility. When RVR is installed on more than one active runway, the worst point A is
  taken for the algorithm.

The lowest value of VIS and RVR is selected for computing the coefficient value of the weather
class. For cloud cover, type and base, the algorithm takes into account the cloud group with the
most extended cover; if two or more cloud groups have the same extent of cover, the lowest one is
selected for computing the coefficient of the weather class.
Based on the one million METARs collected so far for 30 European airports, CLD_base1 contains
values in 66% of cases, CLD_base2 in 37%, CLD_base3 in 10% and CLD_base4 in only 0.2%.
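The severity-code and coefficient assignment for this class can be sketched as below. The band boundaries come from Figure 6; the function signature, the use of a single lowest VIS/RVR value as input, and the treatment of values falling between two bands are assumptions of this sketch (the authoritative logic is the PL/SQL of Annex II).

```python
def ceiling_visibility_class(vis_m, ceiling_ft=None, cover=None):
    """Return (severity_code, coefficient) from the lowest VIS/RVR value
    and the governing cloud group (only OVC/BKN layers count as a ceiling)."""
    ceiling = ceiling_ft if cover in ("OVC", "BKN") else None
    if vis_m <= 325 or (ceiling is not None and ceiling <= 50):
        return 4, 5
    if vis_m <= 500 or (ceiling is not None and ceiling <= 150):
        return 3, 4
    if vis_m <= 750 or (ceiling is not None and ceiling <= 250):
        return 2, 2
    return 1, 0

# Worked example from par. 3.4.1: RVR 600 m, overcast base 800 ft
code, coeff = ceiling_visibility_class(600, ceiling_ft=800, cover="OVC")
# -> severity code 2, coefficient 2, consistent with the Step 1 example
```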

Description of the expert inputs for assigning coefficients (last column of Figure 6)

The coefficient varies from 0, assigned to code 1, to 5, assigned to code 4.

The logic for assigning coefficients depends on the level of technological sophistication and/or
level of investment which would be necessary to best mitigate a given level of ceiling and
visibility.

In visibility conditions code 1, it is possible to conduct non-precision approaches and the landing
interval between aircraft is the closest possible; that is why the assigned coefficient is zero. In
visibility conditions code 2, precision approaches CAT I should be conducted; landing intervals
remain the same, but the reduction of visibility may increase the probability of some uncertainties
in operations (e.g. an aircraft might miss the first available taxiway and remain on the runway
until the next exit). Coefficient 2 is assigned.

In visibility conditions code 3, CAT II approaches should be conducted, Low Visibility Operations
are activated, and the landing intervals between aircraft will increase where ILS is used for
landing. Operations become more complex than in codes 1 and 2; therefore coefficient 4 is assigned.
In visibility conditions code 4, CAT III approaches should be conducted, Low Visibility Operations
are activated, landing intervals between aircraft could further increase where ILS is used for
landing, and departure traffic will also suffer limitations. Operations become slightly more
complex than in code 3; therefore coefficient 5 is assigned.

An airport community might decide to introduce MLS in order to significantly upgrade its response
to poor visibility and low ceiling (during code 3 and code 4 weather). However, this solution is
very costly for airport stakeholders, and currently only London LHR has implemented an MLS.

Limits of the weather class

A current limitation of the ceiling & visibility class is that it assumes the availability of a
Runway Visual Range (RVR) system on at least one runway. Were the class used to assess ceiling &
visibility at airports not equipped with RVR, the severity codes would need to be adapted.

3.10 Weather class: wind

Introduction

Wind has multiple effects on ANS and airport performance. Strong head-winds and cross-winds can
reduce runway throughput and increase delays; taking off and landing with a tailwind above
aircraft-type-dependent limits has to be avoided for safety reasons. Aircraft sensitivity to strong
winds increases on wet or contaminated runways.

Description of the METAR parsing

The METAR parsing supplies the wind and gust speed.

Overall presentation of the weather class

Wind        Wind speed [kt]        Coef
Code 1      ≤ 15                   0
Code 2      between 16 and 20      1
Code 3      between 21 and 30      2
Code 4      > 30                   4
Wind gust: +1 added to the coefficient

Figure 7: Weather class Wind characteristics.

Description of the expert inputs for assigning severity codes (first column of Figure 7)

The severity codes are based on commercial aircraft whose tailwind limits are 10 KTS or higher and
whose demonstrated crosswind limits are 20 KTS or higher under dry runway conditions.

The severity codes have been formulated on the basis of the following considerations:
- Airworthiness limits per type of aircraft as reported in "Safety aspects of tailwind operations",
  NLR-TP-2001-003, National Aerospace Laboratory (Netherlands);
- Airworthiness limits per type of aircraft as reported in "Safety aspects of aircraft operations
  in crosswind", NLR-TP-2001-217, National Aerospace Laboratory (Netherlands);
- Cross-analyses between wind speed, punctuality, ATFM and ASMA delays at all ATMAP airports;
- Comparison of results between the option of describing the wind class with speed only or with
  speed and wind direction.

Only wind speed was used to develop the wind weather class for the following reasons, which were
observed during the validation:
 airport operations at the majority of European airports run smoothly while the wind speed
(including gusts) remains below 15 knots (severity code 1).
 wind direction does not appear to have any significant effect at the majority of European
airports. In any case, when the wind speed increases above 15 KTS, airport operations become
more difficult irrespective of the relative wind direction (cross or head wind). Tailwind
does not usually add excessive complexity to operations unless the runway direction in use
has to be changed.
 at the same wind speed, the presence of gusts increases the difficulty of operations.
Therefore, when gusts are present, the wind speed considered is the gust value instead of
the average value.
 a minority of airports can be impacted when the wind comes from specific directions at
speeds between 10 KTS and 15 KTS. However, the days when this happens are relatively few.

It was finally concluded to retain a wind speed of 15 KTS as the boundary between “good” and
“bad” wind conditions. This conclusion may be revisited as more flight data including landing and
take-off runways is processed.

The combination of a wet, ice, contaminated runway with strong winds increases the impact on ANS
performance (throughput and delays). The tests and trials show that the severity codes used in the
next weather class (precipitations) take this situation into account. The runway conditions are further
discussed in the next paragraph.

Description of the expert inputs for assigning coefficient (last column to the right in Figure 7)

The coefficients assigned to each severity code range from zero to four. These coefficients are
relatively low compared to other weather phenomena (e.g. dangerous phenomena), because wind
does not disrupt operations unless it is very high (e.g. 40 KTS of cross wind component). These
events are very rare and they are generally accompanied by other phenomena (cumulonimbus,
tornadoes, etc.).

The coefficient values of the weather class “wind” are assigned as a function of the wind speed:
 coefficient zero is assigned to Code 1 when wind is below 15 KTS.
 coefficient one is assigned to Code 2 (wind between 16 and 20 KTS) as the wind speed
has an impact on aircraft ground speed and consequently on the arrival throughput.
 coefficient two is assigned to Code 3 as the wind speed (between 21 and 30 KTS) has
an impact on aircraft ground speed, but also some aircraft reach the crosswind
airworthiness limits (e.g. C550).
 coefficient four is assigned to Code 4 (wind speed >30kts) as the wind starts having a
severe impact on airport operations (higher impact on ground speed and more and more
aircraft reach the crosswind airworthiness limits).

Finally, since gusting wind makes operations more difficult, the coefficient is increased by one in
the presence of gusts.
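The rules above can be condensed into a short sketch (Python for illustration; the actual ATMAP implementation is in PL/SQL). One point left ambiguous by Figure 7 is whether the gust increment applies within severity code 1; the sketch assumes it does not, since code 1 is defined as covering speeds, including gusts, below 15 knots:

```python
def wind_class(avg_speed_kt, gust_kt=None):
    """Severity code and coefficient for the 'wind' weather class (Figure 7).

    When gusts are reported, the gust value is used instead of the
    average wind speed."""
    speed = gust_kt if gust_kt is not None else avg_speed_kt
    if speed <= 15:
        code, coef = 1, 0
    elif speed <= 20:
        code, coef = 2, 1
    elif speed <= 30:
        code, coef = 3, 2
    else:
        code, coef = 4, 4
    # Assumption: the +1 for gusts applies only above severity code 1.
    if gust_kt is not None and code > 1:
        coef += 1
    return code, coef

print(wind_class(12))       # (1, 0)
print(wind_class(15, 25))   # (3, 3)
```

For example, an average wind of 15 KTS gusting 25 KTS is classified on the gust value: severity code 3, coefficient 2, plus 1 for the gusts.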

Limits of the weather class

No other limitations of the weather class were observed during the validation phase, other than the
ones described above.

3.11 Weather class: precipitations

Introduction

Precipitations4 may have an impact on the level of runway friction, which in turn influences aircraft
runway occupancy times, a significant driver of ANS performance at airports. If runway friction is
poor, if the runway is contaminated or if ice is present, then runway occupancy times increase
dramatically, runway throughput is reduced and delays grow. The status of runway friction is
influenced by the level of precipitations, by the quality of the runway construction and by the
quality of its maintenance.

In developing the weather algorithm, we have assumed well built and well maintained runways
which ensure a good wheel-tyre friction and good braking action when the runways are wet during
continuous light or moderate precipitations, excluding snow and freezing rain. This is usually the
case at airports used by commercial aviation.

Currently the weather algorithm infers the runway friction from the level and type of observed
precipitations. The more intense (e.g. heavy rain) or compact (snow, ice pellets, etc.) the
precipitation, the more difficult it is to maintain good runway friction levels.5 The severity
codes and coefficients of the weather class “precipitations” represent this difficulty.

Description of the METAR parsing

The parsing result gives the content of ‘Present Weather’.

Overall presentation of the weather class

Severity Code    Type of precipitations    Coef

Code 1           no precipitation          0
Code 2           RA, UP, DZ, IC            1
Code 3           -SN, SG, +RA              2
Code 4           FZxx, SN, +SN             3

Figure 8 Weather class Precipitation characteristics


Description of the expert inputs for assigning severity codes (1st column to the left in Figure 8)

Code 1 refers to no precipitation. Code 2 implies light precipitations excluding snow. Code 3
includes heavy rain and light snow which might have a potential impact on runway friction, but a well
constructed runway and/or a standard maintenance are usually sufficient to avoid a significant
degradation in the runway friction. Code 4 includes precipitations which need heavy equipment and
sophisticated procedures to keep the runway clean and the quality of runway friction maintained.

4
Definition of “precipitation” as in ICAO Annex 3.
5 ICAO Annex 14 prescribes that the surface of a paved runway shall be constructed so as to provide good
friction characteristics when the runway is wet, and that the drainage characteristics of the runway shall be
adapted to the expected level of precipitations. Annex 14 further prescribes that the surface of a paved runway
shall be maintained in a condition providing good friction characteristics and low rolling resistance. Snow,
slush, ice, standing water, mud, dust, sand, oil, rubber deposits and other contaminants shall be removed as
rapidly and completely as possible to minimize accumulation.

Description of the expert inputs for assigning coefficient (last column to the right in Figure 8)

The assignment of the coefficients was done in close relationship with the next weather class,
“freezing conditions”. In fact, precipitations are used twice in the algorithm: in the weather class
“precipitations” and in the weather class “freezing conditions”. This takes into account that the
management of runway friction is more difficult at low temperatures due to the risk of ice
formation and accumulation of snow.
To give a couple of examples:
 a METAR reporting light snow at a temperature of zero counts coefficient 2 from the weather
class “precipitations” and 3 from the weather class “freezing conditions”, for a total of 5 points;
 a METAR reporting heavy rain at a temperature of +9 counts coefficient 2 from the weather
class “precipitations” and 0 from the weather class “freezing conditions”, for a total of 2 points.
The choice to infer runway friction from precipitations and temperature allows measuring the
external weather conditions in an objective way across European airports6.
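The two examples above can be reproduced with a short sketch (Python for illustration, not the ATMAP PL/SQL implementation; the coefficient tables are condensed from Figures 8 and 9):

```python
# Coefficients per severity code, condensed from Figure 8 and Figure 9.
PRECIP_COEF = {1: 0, 2: 1, 3: 2, 4: 3}          # weather class "precipitations"
FREEZING_COEF = {1: 0, 2: 0, 3: 1, 4: 3, 5: 4}  # weather class "freezing conditions"

def combined_score(precip_code, freezing_code):
    """Sum the two class coefficients, as in the examples above."""
    return PRECIP_COEF[precip_code] + FREEZING_COEF[freezing_code]

# Light snow (-SN) at 0 degC: precipitation Code 3, freezing Code 4 -> 2 + 3
print(combined_score(3, 4))  # 5
# Heavy rain (+RA) at +9 degC: precipitation Code 3, freezing Code 1 -> 2 + 0
print(combined_score(3, 1))  # 2
```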

Limits of the weather class

No particular limit was identified for this weather class.

3.12 Weather class: freezing conditions

Introduction

The ATMAP Group defined “Freezing conditions” as those conditions in which the outside air
temperature is below +3°C (37.4°F) and the meteorological observations report visible moisture in
any form (such as fog with visibility below 1.5Km, rain, snow, sleet or ice crystals).

The main impact on ANS performance can be attributed to a deterioration of runway friction (see
also the previous weather class “precipitations”) and to the number of aircraft which have to be
de-iced before take-off. When de-icing takes place at “remote de-icing pads” (RDPs) after
off-block, the total taxi-out time is composed of three sub-durations:
 the duration from off-block until reaching the RDP;
 the duration of the de-icing operations;
 the duration from leaving the RDP until take-off.
A high number of aircraft needing de-icing, and the complexity of maintaining standard runway
friction values under low temperatures and snowfall, can contribute to low performance if the
weather situation is not properly mitigated.

Description of the METAR parsing

The parsing results provide the content of ‘Present Weather’, the temperature and the dew point.

6
The direct measure of the runway friction is possible and data exist and may be efficiently collected.
However the runway friction measures together the weather phenomena (e.g. snow) and its mitigation
strategy (runway construction and its maintenance).

Overall presentation of the weather class

Freezing          Temperature    Moisture presence                   TT - DP    Coef
conditions Code   [°C]                                               [°C]

Code 1            T > 3          with or without visible moisture               0
Code 2            -15 < T ≤ 3    NO visible moisture                 ≥ 3        0
Code 3            -15 < T ≤ 3    DZ, IC, RA, UP, FG, GR, GS, PL      < 3        1
Code 4            -15 < T ≤ 3    -SN, SG, +RA, RASN, BR                         3
Code 5            -15 < T ≤ 3    SN, +SN, SHSN, FZxxx                           4
                  T ≤ -15        visible moisture

Figure 9 Weather class Freezing conditions.

In Figure 9, TT means true temperature in °C and DP is dew point in °C.

Description of the expert inputs for assigning severity codes (1st column to the left in Figure 9)

The severity codes in the weather class “freezing conditions” are presented in Figure 9 and
described hereunder:
 freezing conditions do not exist when the temperature is above +3°C (Code 1);
 freezing conditions are negligible when the temperature is between -15°C and +3°C, there is
no visible moisture and the dew point is not close to the true temperature (Code 2);
 freezing conditions are light when the temperature is between -15°C and +3°C and there are
light precipitations or the dew point is close to the true temperature (Code 3);
 freezing conditions are moderate when the temperature is between -15°C and +3°C and the
snow is light (Code 4);
 freezing conditions are severe when the temperature is between -15°C and +3°C and the
snowfall is moderate or heavy, but also when there is any type of visible moisture and the
temperature is at or below -15°C (Code 5).
The logic of the severity codes considers that the complexity of aircraft de-icing, ice control and
snow removal is mainly driven by the level of precipitations rather than by the temperature, unless
the latter goes below -15°C.
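One possible reading of Figure 9 can be sketched as follows (an illustrative Python sketch, not the ATMAP implementation; the handling of temperatures at or below -15°C without visible moisture, and the exact role of the dew-point spread, are assumptions where Figure 9 is ambiguous):

```python
# Hypothetical sets condensed from Figure 9; names are illustrative.
LIGHT_MOISTURE = {"DZ", "IC", "RA", "UP", "FG", "GR", "GS", "PL"}
MODERATE_PRECIP = {"-SN", "SG", "+RA", "RASN", "BR"}
HEAVY_PRECIP = {"SN", "+SN", "SHSN"}

def freezing_code(temp_c, dew_point_c, phenomena):
    """Severity code for the 'freezing conditions' class (Figure 9).

    `phenomena` is the set of present-weather groups from the METAR
    (an empty set means no visible moisture)."""
    visible_moisture = bool(phenomena)
    if temp_c <= -15:
        # Figure 9 only covers the visible-moisture case here; falling
        # back to Code 1 in the dry case is an assumption.
        return 5 if visible_moisture else 1
    if temp_c > 3:
        return 1
    # Now -15 < T <= 3.
    if any(p.startswith("FZ") for p in phenomena) or phenomena & HEAVY_PRECIP:
        return 5
    if phenomena & MODERATE_PRECIP:
        return 4
    if phenomena & LIGHT_MOISTURE or (temp_c - dew_point_c) < 3:
        return 3
    return 2

print(freezing_code(0, -1, {"-SN"}))  # 4: light snow just below +3 degC
```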

Description of the expert inputs for assigning coefficient (last column to the right in Figure 9)

Coefficient zero is assigned to Code 1 and Code 2, because the number of de-iced aircraft and the
work load on the airport maintenance team will be relatively limited.

Coefficient one is assigned to Code 3 because the de-icing operations will be relatively short and
will not affect all aircraft, while the airport maintenance team will only carry out limited ice
control and snow removal activities.
The ATMAP Group noted that Code 3 tends to be treated differently by airports in Northern and
Southern Europe. If a Southern airport reports Code 3 for many hours, then it is likely that delays
due to weather start rising. A Northern airport most likely will continue operating normally.

Coefficient three is assigned to Code 4 as the vast majority of aircraft would need de-icing and the
workload increases on the airport maintenance team.

Coefficient four is assigned to Code 5 as freezing conditions become very harsh and difficult to
mitigate, even for Scandinavian airports.

Limits of the weather class

This weather class has reached a high level of maturity given the amount of validation which was
conducted for setting the different parameters.

3.13 Weather class: dangerous phenomena

Introduction

This weather class collects all observed present weather phenomena which are dangerous for the
safety of aircraft operations. Observed present weather phenomena shall be reported in terms of
type and characteristics and qualified with respect to intensity or proximity to the aerodrome (up to
16 Km from the airport7), as appropriate.

The principal dangerous weather phenomena are the following:

 Cumulonimbus (CB) with or without precipitation
 Towering cumulus (TCU): not a dangerous phenomenon in itself, but TCU can quickly
develop into cumulonimbus; furthermore, the weather radar on board the aircraft cannot
distinguish cumulonimbus from towering cumulus
 Thunder with or without precipitation (TS)8
 Ice pellets (PL), small hail and/or snow pellets (GS), hail (GR)
 Funnel cloud (tornado or waterspout) (FC)
 Squall (SQ)
 Volcanic ash (VA)
 Dust-storm (DS), sandstorm (SS), sand (SA), dust/sand whirls (PO)

Given the high number of variables which characterise the impact of these phenomena on airport
operations (position, duration, intensity, extension of the affected area, etc.), it is nearly
impossible to predict their real impact: each case differs from the others. It is the responsibility
of ANS to find the best way to forecast the evolution of these dangerous weather phenomena and
to maximise the use of whatever portion of the airspace and of the airport remains free from them.

Description of the METAR parsing

With regard to the weather class “dangerous phenomena”, the METAR parsing methodology
delivers three groups of cloud values (CLD_base, CLD_type and CLD_cover); the fourth cloud
group is not used as it is almost never populated with values.

For a given METAR, we consider 4 different aspects of dangerous phenomena:


 Presence of CB with no precipitation;
 Presence of TCU with no precipitation;
 Presence of CB or TCU with shower precipitations ( ±SHxx);
 Presence of dangerous precipitations or other dangerous events (FC,DS, GR, TS, etc.).

The METAR parsing provides also ‘Present Weather’ used for dangerous phenomena detection.

7
See ICAO Annex 3 Aeronautical meteorology
8
When thunder is heard or lightning is detected at the aerodrome during the 10-minute period preceding the
time of observation but no precipitation is observed at the aerodrome, the abbreviation “TS” should be used
without qualification.

Overall presentation of the weather class

Code      Cloud cover    CB    TCU

Code 1    -              0     0
Code 2    FEW            4     3
Code 3    SCT            6     5
Code 4    BKN            10    8
Code 5    OVC            12    10

Figure 10: Coefficient values for CB and TCU presence without precipitations or other events.

Code      CB      TCU     -SHxx    SHxx & +SHxx

Code 1    -       -       0        0
Code 2    -       FEW     4        6
Code 3    FEW     SCT +   8        12
Code 4    SCT     BKN     10       15
Code 5    BKN     OVC     12       20
Code 6    OVC     -       18       24

Figure 11 Coefficient values for CB and/or TCU presence with shower precipitations (SHxx).

Code      Type of dangerous phenomena       Coefficient

Code 1    -                                 0
Code 2    GS                                18
Code 3    FC, DS, SS, VA, SA, GR, PL, TS    24
Code 4    +TS                               30

Figure 12 Weather class Dangerous phenomena.

Description of the expert inputs for assigning severity codes (1st column to the left in Figure 10, in
Figure 11 and in Figure 12)

The severity codes in Figure 10 are based on cloud coverage. The wider the coverage, the more
likely it is that the presence of CB or TCU risks disrupting aircraft operations. The severity codes in
Figure 11 recognise that the dangerous phenomena are more intense in the presence of shower
precipitations. They combine cloud coverage and the intensity of shower precipitations (light,
moderate or heavy). The severity codes in Figure 12 refer to dangerous precipitations and thunder.
All of these phenomena are very severe for aircraft operations, but the most disruptive is when
heavy thunder activity takes place.

Description of the expert inputs for assigning coefficient (last column to the right in Figure 10, in
Figure 11 and in Figure 12)

The coefficient attributed to the presence of CB without precipitation depends on the four levels of
cloud coverage presented in Figure 10. For TCU, the coefficients are lower than for CB, as also
presented in Figure 10. The presence of shower precipitation (±SHxx) increases the coefficient
value attributed to the METAR (see Figure 11).

When a dangerous precipitation (e.g. small hail, GS), thunder (TS), a tornado, etc. is reported in
the METAR, this further increases the coefficient of the METAR (see Figure 12).
The score for a given METAR is the maximum of the coefficients recorded across Figure 10,
Figure 11 and Figure 12. Here are some examples:
 in METAR 1 there are SCT CB and GS, therefore the coefficient for the class dangerous
phenomena is 18 (the maximum of 6 and 18);
 in METAR 2 there are only SCT CB and BKN TCU, therefore the coefficient is 8;
 in METAR 3 there are BKN CB, SHRA and GS, therefore the coefficient is 20 (the maximum
of 20 and 18).
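The maximum rule and the three examples can be sketched as follows (an illustrative Python sketch; the four arguments are the coefficients already read from Figures 10, 11 and 12 for a given METAR):

```python
def dangerous_score(cb_no_precip, tcu_no_precip, shower_coef, event_coef):
    """Overall 'dangerous phenomena' coefficient for one METAR: the
    maximum of the coefficients obtained from Figures 10, 11 and 12."""
    return max(cb_no_precip, tcu_no_precip, shower_coef, event_coef)

# METAR 1: SCT CB (6, Figure 10) and GS (18, Figure 12)
print(dangerous_score(6, 0, 0, 18))   # 18
# METAR 2: SCT CB (6) and BKN TCU (8), both from Figure 10
print(dangerous_score(6, 8, 0, 0))    # 8
# METAR 3: BKN CB with SHRA (20, Figure 11) and GS (18)
print(dangerous_score(0, 0, 20, 18))  # 20
```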

Limits of the weather class

The position of the dangerous phenomena relative to the runway system is an important driver of
their real impact on airport operations. However, this is not detected by the weather algorithm in
its current form.

3.14 Things to bear in mind when using the weather algorithm

The weather algorithm is not a tool to predict performance at airports. It is recognised that under
similar weather conditions an airport may respond better on some days than on others.

The weather algorithm is calibrated for assessing weather conditions at airports where the
commercial aviation represents the majority of traffic. Airports used by a significant portion of
recreational aviation may be more sensitive to weather than those airports whose traffic is largely
composed of commercial aviation.

The weather algorithm does not take into account the time of day at which a weather
phenomenon which may affect ANS performance takes place. Clearly, when adverse weather
hits periods when traffic demand is at or above the level of capacity, the potential impact is more
severe. The purpose of the weather algorithm is to assess and describe the weather conditions
independently of traffic congestion; the results from the weather algorithm should then be
combined with traffic congestion levels in a subsequent step of the performance analysis. An
example is how the CDG CDM team has integrated the weather algorithm into its monthly
post-analysis process.

The weather algorithm cannot detect situations where operations are slowed down by wind aloft
while the ground wind is calm. This is because such weather observations are not collected at
European level on a systematic basis and are not used in real operations (e.g. wind aloft is not an
input to approach sequencing tools). It is hoped that SESAR innovations will solve this problem.

The weather algorithm cannot detect convective weather and dangerous phenomena which take
place farther than 9 nautical miles (16 km) from the airport.

4 Data used for assessing weather conditions
4.1 METAR data

The ATMAP Group used METAR data to provide an assessment of the weather situation at airports
in past IATA seasons. It selected the METAR data set because:

 METAR is a message containing all safety critical meteorological observations for flight
operations in a given airport and nearby airspace (up to 16 km9 from the airport). This
information is distributed and immediately available to relevant ATC positions.
 METAR data is considered to be at least at the level of quality described in ICAO Annex 3,
ATTACHMENT A – “Operationally desirable accuracy of measurement or observation”.
 METAR messages are distributed worldwide and publicly available.

Other sources of meteorological observations which were assessed by the ATMAP Group are:

 SPECI messages;
 SYNOP messages;
 Local meteorological products.

As described in ICAO Annex 3, SPECI messages may be used to report significant changes
occurring between the issuance of two consecutive METARs. The ATMAP Group could also
process SPECI messages, as they are issued in the same format as METARs. In practice,
however, SPECIs are rarely used at major European airports, where METARs are issued every
30 minutes.

SYNOP messages are used to report synoptic surface observations, covering current conditions
and past observations up to three hours before. These messages are used by Brussels for
performance monitoring purposes, and Brussels confirmed that SYNOP messages are consistent
with METARs. In summary, the two message types are basically equivalent.

As described in ICAO Annex 3, METAR data may be complemented by local MET products,
which are only distributed to local airport stakeholders. There is no reason to believe that these
two weather products report contradictory information; such a circumstance would represent a
safety concern.

Local MET products can be accessed through ATIS, VOLMET or local meteorological displays.
They vary from airport to airport, depending on the needs of each airport community (for instance,
Helsinki may have specific observations for snow, while Munich may have additional information
on wind gusts). In summary, local MET products are not usable for an assessment of the weather
situation at European level.

4.2 METAR limitations

The METAR data collection started in April 2009. Initially there were difficulties with the loss of
METARs, which were investigated by EUROCONTROL PRISME. The problem was solved in
January 2011 and the data collection has been more complete since then. A new parser is
planned.

A limitation of METAR messages is that weather phenomena beyond 16 km from the airport are not
reported. This is particularly relevant for wind aloft between 1500 and 5000 ft and for dangerous
weather phenomena (thunderstorms, lightning, CB, etc.). This limitation may be more or less
significant depending on how many weather phenomena take place outside the airport.

9
See ICAO Annex 3 par. 4.4.2.6

The limitation related to wind aloft cannot easily be overcome: in Europe no meteorological
observation of wind aloft in the band between 1500 and 5000 ft is made available to ATC10.

The limitation related to dangerous weather phenomena taking place more than 16 km from the
airport could be overcome through other meteorological observations complementing METARs
(mainly meteorological radar data).

The CFMU maintains operational logs which describe the reasons for each ATFM regulation.
Weather events are recorded in these logs. The operational logs are a good external data source
for monitoring the quality of the weather assessment made using METAR data.

5 Algorithm validation activities


The ATMAP Group validated the algorithm through two principal types of activities:

1. validation of weather classes and severity codes

2. validation of coefficients and of the computation method to get to the status (are weather
conditions good or bad?)

5.1 Weather classes and severity codes validations

The validation strategy was adapted to the number of uncertainties to be checked. For instance,
validating the class “ceiling and visibility” was straightforward as the different situations are
well codified in ICAO documentation. In contrast, validating the class “freezing conditions” was more
complicated due to the number of choices that had to be made.

Hereunder you can find a brief description of the validation activity conducted for each weather
class. Should you require more details about a specific activity, please contact PRU
([email protected]).
Weather class / Validation activity

Ceiling & visibility: The most important choice was the definition of the severity codes. After
reviewing local data from participating airports, it was decided that severity code 1 (good
visibility and ceiling) should be quite wide and should include conditions down to CAT I ceiling
and visibility. Another choice concerned which RVR to consider when more than one RVR is
available in a METAR; it was finally decided to take the worst value.

Wind: The first attempt was to consider both direction and intensity. However, the validation
demonstrated that the weather deterioration is mainly driven by wind intensity and only in a few
specific cases by wind direction. Therefore, only intensity was used in defining the severity
codes, with a small adaptation for the few airports which cannot use the optimal number of
runways under certain wind conditions because the airworthiness limits of the majority of the
aircraft population are reached. Another choice concerned gusting winds: in the case of gusts,
the intensity considered is the sum of the average value and the gust value, which is in fact what
is usually done in flight operations.

Precipitation: The first issue was to identify the types of precipitation which have a negligible
impact on runway friction. After interviewing local experts, the group came to the conclusion
that, on well constructed and maintained runways, light and moderate rain has a negligible
impact on runway friction. The second issue was to identify a strategy for dealing with snow
phenomena, which have an impact both on de-icing activities and on runway friction. The
decision was to consider snow in both the “precipitations” and the “freezing conditions” weather
classes.

Freezing conditions: The initial draft algorithm was revised many times on the basis of extensive
data analyses and important feedback from experts. In summary, de-icing data from CDG, FRA,
HEL and PRG was analysed to arrive at the final version of this weather class, and de-icing
expert advice was received from HEL and CDG.

Dangerous phenomena: After some analyses, it was concluded that dangerous phenomena are
usually short in duration but have a high impact on operations. The main difficulty was to list all
dangerous phenomena carefully and to group them logically into a few severity codes. However,
the main issue was to assign the right coefficients (see next paragraph).

10
With regard to wind aloft, no consolidated meteorological observations are applied and passed to ATC in Europe.
Information on wind aloft would be necessary in the height band between 1500 and 5000 feet in order to organise an
efficient arrival sequence. With the current level of technology and the current ICAO technical specifications, it would be
possible to organise a systematic and frequent collection of AIREP/AMDAR data to feed approach sequencing tools; at
the moment this is not done. This means that ATCOs rely on spot wind information received from pilots during voice
communications and that approach sequencing tools provide unreliable information when strong winds affect the
downwind, base and final approach legs. ATCOs therefore need to organise the arrival sequence without supporting
tools. The consequence is that wind aloft between 1500 and 5000 feet has a significant impact on performance, but
there is no objective record of these events, simply because the meteorological data are not collected and organised.

5.2 Coefficients validation

The main reason to introduce coefficients in the algorithm is the non-linear relation between
severity codes: for instance, heavy snow has a much higher impact on airport operations than
moderate snow. Once introduced, the coefficients were adjusted and refined using expert
feedback and correlation with indicators such as punctuality. However, careful attention was paid
to avoid adjustments of a coefficient that were not justified by a real change in weather
conditions.

The computation method was designed to reflect that the weather condition on a given day is the
sum of the different conditions observed every half-hour during the day. The first attempt was in
fact to demonstrate that an adverse weather phenomenon has more impact when it takes place
over one long consecutive period rather than over several non-consecutive short periods. The
validation demonstrated instead that the overall weather condition on a given day is not related to
consecutive hours of adverse weather, but rather to the total number of hours affected by
adverse weather.

6 Evolution of the weather algorithm


The PRU, in consultation with the ATMAP Group, considers that no further evolution of the
algorithm is possible until:

 the algorithm is used widely in European airports;

 European airport communities evaluate the results of the algorithm;

 The PRU and the local airport communities analyse a useful amount of data.

European airport communities are invited to provide feedback on the weather algorithm, as follows:

 Provide the PRU with data analyses comparing results obtained through data sources
available at European level (e.g. comparisons between SYNOPs and METARs)

 Give advice about any meteorological product which could provide wind aloft between 1500
and 5000 ft and convective weather observations which could be used to measure weather
conditions at airports’ surroundings in a consistent manner across Europe.

 Provide any statistical analyses which could support the evolution of the weather algorithm,
including analyses which confirm the validity of the choices made by the ATMAP Group.

Such feedback should be sent to the PRU by mid-2013 at the latest ([email protected]).

The PRU will review all inputs received in order to propose any further modification to the algorithm
and/or modifications to the associated data flow.

7 ATMAP Group members


The members of the ATMAP Group, which developed this weather algorithm, were as follows:

 BARTELS, Jens (Munich Airport);


 BECKMANN, Matthias (FRAPORT);
 BRETON, Herve (DSNA);
 CARRILLON Jean-Yves (Meteo France);
 CASTELAIN, Franck (Air France);
 CERMAK, Ladislav (ANS CZ);
 COTE Heloise (Eurocontrol);
 DENTCHEV Milen (Eurocontrol)
 FIEAND Juha (NorthPort);
 GREENHALGH, Kathryn (BAA-LHR);
 HART, Dennis (Eurocontrol);
 KALITA, Hanna (PANSA);
 KRACMAR, Jan (ANS CZ);
 KOLAROV Anthony (Bulatsa)
 LINAIS, Rodolphe (AdP);
 MATTHYS, Marc (Belgocontrol);
 McBEAN, Ian (NATS);
 MONCEAU, Gilbert (Meteo France);
 PRETI Francesco (Eurocontrol);
 RIKKINEN, Markku (Finavia);
 RIVOISY, François-Xavier (AdP);
 SPOONER, Robert (NATS);
 SUORTO, Timo (Finavia);

8 For help with the algorithm


Should you require any assistance in using the weather algorithm please contact the PRU
([email protected]).

9 Reference documents
 ICAO Annex 3 “Meteorological Services for International Air Navigation”, in particular
Appendix 3 "Technical specifications related to Meteorological observations and reports."
 ICAO Annex 14 “Aerodromes”
 Aerodrome Design Manual (ICAO Doc 9157)
 Airport Services Manual (ICAO Doc 9137)
 Manual of Aircraft Ground De-icing/Anti-icing Operations (ICAO Doc 9640)
 Definition of freezing conditions in the Swedish AIC circular B160/2002
 Definition of freezing conditions in the "Fly Fokker" magazine "Safe Cold Weather Operation"
issued in November 2009
 FAA Advisory Circular (AC No: 150/5200-30C) "Airport Winter Safety and Operations" ;
Dated: 12/09/08 (09 December 2008)
 Winter Operation at Zurich Airport (Revision 8 dated 1.11.2009)

10 PL/SQL description of the algorithm


The first version of the algorithm was developed in PL/SQL using Oracle Toad. This database
query language also permits procedural programming.

Once a METAR is parsed and inserted into the METAR database, it is available for processing.

Each METAR is treated individually by a function specially designed for each weather class. The
result is stored in a table that contains basic information about the METAR and the results of
these functions:
 METAR id number;
 airport;
 date, time;
 function results.

A second processing step builds a table that contains basic information for each day and the final
results:
 airport;
 date;
 number of METARs provided during the 16 hours analysed;
 average value of each weather class result;
 average value of the sum of the weather class results;
 main cause of bad weather, with a minimum of R ≥ 0.5;
 status of the day;
 relevant KPIs computed at day level (optional).
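The daily aggregation can be sketched as follows (an illustrative Python sketch of this second processing step, not the PL/SQL implementation; the field names are hypothetical, and the determination of the main cause of bad weather and of the day status is omitted since the thresholds are not given here):

```python
from statistics import mean

def daily_summary(half_hourly_scores):
    """Aggregate per-METAR class coefficients into daily results.

    `half_hourly_scores` is a list of dicts, one per METAR over the
    16 hours analysed, mapping each weather class name to its coefficient."""
    n = len(half_hourly_scores)
    classes = half_hourly_scores[0].keys()
    # Average of each weather class over the day.
    class_avgs = {c: mean(s[c] for s in half_hourly_scores) for c in classes}
    # Average of the per-METAR sums: the overall day score.
    day_score = mean(sum(s.values()) for s in half_hourly_scores)
    return {"metar_count": n, "class_averages": class_avgs, "day_score": day_score}

day = daily_summary([
    {"wind": 1, "precipitations": 2, "freezing": 3, "dangerous": 0},
    {"wind": 2, "precipitations": 2, "freezing": 3, "dangerous": 0},
])
print(day["day_score"])  # 6.5
```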

Parsed METAR → Processed METAR → Daily results

Figure 13 Weather algorithm: databases flowchart.

Annex I: Thresholds, severity codes and coefficients: determination
process description

Introduction

In order to estimate the value of each coefficient, the threshold for determining the day status, and
the lower and upper bounds of each severity code, an iterative process has been implemented.

This iterative process was put in place progressively over time and has been fully applied
since October 2010.

[Figure 14 — Iterative process for the assignment of severity code bounds, coefficient and threshold values (flowchart): a request (from stakeholders' local results, an ATMAP MET meeting, a reference document or the validation process) produces a hypothesis on a value; if the algorithm can implement it, the METARs are re-analysed iteratively (c = c + 1, n = n + 1) and the results are tested on the number of bad weather days, on the correlation with airports' local results and on the performance variation (rejection bound 25%); an accepted modification is applied after consultation of the ATMAP MET group.]

Figure 14 presents a flowchart illustrating the iterative process used to assign a code or a coefficient in
the ATMAP MET algorithm. Each major step is described in the current section. The example
presented is the upgrading of a given coefficient, c, where the expected effect is an increase in
the number of bad weather days and a better correlation with airports' local results. Note that the
variable n represents the number of incremental steps.

After each test, the METARs are analysed individually in order to identify which coefficient values
changed and which remained unchanged (see step 1 of the weather algorithm computation described above).
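
The incremental loop of Figure 14 can be sketched as follows (a Python illustration under assumed interfaces: `evaluate` and `performance_ok` are hypothetical stand-ins for re-running the METAR analysis and for the 25% performance test, and `c_max` is an illustrative safety bound):

```python
def tune_coefficient(c_initial, evaluate, performance_ok, c_max=10):
    """Sketch of the Figure 14 loop for upgrading a coefficient c.

    evaluate(c) -> (n_bad_days, correlation) for a trial run with value c.
    performance_ok(c) -> True when the optional performance test passes.
    Returns the accepted value of c, or None if the request is rejected.
    """
    base_bad_days, base_corr = evaluate(c_initial)
    n = 1                      # number of incremental steps taken so far
    c = c_initial + 1
    while c <= c_max:
        bad_days, corr = evaluate(c)
        if bad_days <= base_bad_days or corr == base_corr:
            # no increase in bad weather days, or identical correlation:
            # take a further incremental step
            c, n = c + 1, n + 1
            continue
        if corr < base_corr:
            # correlation decreasing at the first step: request rejected;
            # otherwise fall back to the previous value (c = c - 1)
            return None if n == 1 else c - 1
        # correlation increasing: apply the optional performance test
        if performance_ok(c):
            return c
        return c - 1 if c - 1 > c_initial else None
    return None
```

This is only a schematic reading of the flowchart; the real process also involves expert review by the ATMAP MET group before any value is adopted.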

Expert inputs (Figure 14)

A request may come from different sources: stakeholders, ATMAP MET members, etc.

Each request for modification of the algorithm must be analysed to ensure that it actually improves
the algorithm. A range of values can be proposed to replace the current ones, accompanied by a
description of the expected effects on the results (an increase or decrease in the number of bad
weather days, a better correlation with airports' local results, etc.).

Before a value is modified or assigned, each request must be consistent with the basic assumptions11 of the
algorithm:
 Identical weather phenomena classes, severity codes, coefficients and thresholds are
applied to all European airports.
 The METAR analysis is independent of traffic and infrastructure effects.
 The grouping of weather classes reflects the main weather issues for ANS/airport operations.
 The severity of a weather aspect increases with the severity code number.
 For a given season, the number of bad weather days cannot be null.
 For a given season, based on the average of a homogeneous sample of European airports,
the percentage of bad weather days must not exceed 20%.
 As the 32 METARs between 06:00 and 22:00 are evaluated:
o A weather event in the morning has the same weight as one in the evening.
o A minimum of 20 METARs per day per airport is required to apply the weather
algorithm on a given day.
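
Two of these assumptions translate directly into simple checks (a Python illustration; the thresholds come from the list above, and the per-season check is applied here to a single list of day statuses rather than to the averaged airport sample the text describes):

```python
def day_is_assessable(n_metars, minimum=20):
    """At least 20 of the 32 expected METARs (06:00-22:00) must be available."""
    return n_metars >= minimum

def season_respects_bad_day_bounds(day_statuses, cap=0.20):
    """For a season: the number of bad weather days cannot be null, and the
    share of bad weather days must not exceed 20% of the assessed days."""
    assessed = [s for s in day_statuses if s in ("good", "bad")]
    if not assessed:
        return False
    bad_share = sum(s == "bad" for s in assessed) / len(assessed)
    return 0 < bad_share <= cap
```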

Threshold/severity code/coefficient value hypothesis (Figure 14)

Once the first analysis has been carried out, a hypothesis on the value to assign is formulated. For
example, for severity code 3 of wind, the coefficient c = 2 could be upgraded to 3. This upgrade
might facilitate the detection of the phenomenon by the current algorithm. Conversely, the
coefficient could be downgraded to 1 because of its large impact on the results.

Based on this hypothesis, the algorithm is temporarily modified with the new value and produces new
results.

11
The assumptions are derived from the framework for the development of the algorithm as indicated by the
ATMAP Weather working group (see Chapter 2).


Number of bad weather days analysis (Figure 14)

With this test, the number of bad weather days is compared with the number of bad weather days from
the previous version of the algorithm. If the number of bad weather days does not increase, a new
incremental step is required. Conversely, an increase in the number of bad weather days leads to the
second test.

Airports results analysis

Since the beginning of the project, several ATMAP MET members have provided local data to
validate the algorithm. This information is compared directly (when the results are comparable) or
indirectly with the algorithm results.

In this second test of the iterative process, if the compared results are identical, a new
incremental step follows. If the correlation between the results decreases at the first step (n = 1), the
request is rejected. The flowchart also covers the other possible outcomes.

If the correlation between the compared results increases, the next test can be
applied.

Performance Analysis

This final test is optional and carries little weight in the decision. For a change to the algorithm to
be accepted, the KPIs used must not vary too strongly; this is why the bound for
rejection is set to 25%.

The PRU uses several KPIs to evaluate ANS performance at airports. Several of these can be correlated
with the algorithm results: taxi-out time, additional ASMA time, punctuality, throughput, ATFM arrival
delay due to weather and IATA pre-departure delay codes 71, 75 and 76. Depending on the weather
class, one or a few KPIs are used. The average KPI values are compared between good and bad
weather days.

Acceptance of the modification

Generally, only a few steps are needed to assign a final value or to reject the request.

Once every test has been completed, the modification is submitted to the ATMAP MET group members for
expert judgement on the results.

ANNEX II: PL/SQL code

METAR data
insert into METAR_INFO_24
select
wm.airport,
get_local_time(wm.timevalid, wm.airport) as localtime,
get_seasonyy(wm.timevalid) as season,
dm2.DIR, dm2.SPD, dm2.SPD_GUST, dm2.DIR_MIN, dm2.DIR_MAX,
dm2.VIS1, dm2.VIS_DIR1, dm2.VIS2, dm2.VIS_DIR2,
dm2.RVR1, dm2.RVR_TXT1, dm2.RVR2, dm2.RVR_TXT2,
dm2.PRESENT_WEATHER,
dm2.CLD_COVER1, dm2.CLD_BASE1, dm2.CLD_TYPE1,
dm2.CLD_COVER2, dm2.CLD_BASE2, dm2.CLD_TYPE2,
dm2.CLD_COVER3, dm2.CLD_BASE3, dm2.CLD_TYPE3,
dm2.CLD_COVER4, dm2.CLD_BASE4, dm2.CLD_TYPE4,
dm2.VV, dm2.T, dm2.TD, dm2.QNH
from
(
select
airport,
timevalid,
max(dm.id) as id
from
opmet.DATA_METAR dm,
opmet.MESSAGES_METAR mm1
where
dm.id = mm1.id
group by airport, timevalid
) wm,
opmet.DATA_METAR dm2
where
wm.id = dm2.id;

FUNCTIONS

Visibility & ceiling

CREATE OR REPLACE FUNCTION PRUTEST.GET_VISI_CEILING_CODE_V_2_3
(VIS1_in integer, VIS2_in integer,
RVR1_in integer, RVR2_in integer,
CLD_COVER1_in varchar2,CLD_COVER2_in varchar2, CLD_COVER3_in varchar2,
CLD_BASE1_in integer,CLD_BASE2_in integer,CLD_BASE3_in integer)
RETURN INTEGER
IS
cte_vis integer;
cte_cld_cover integer;
cte_cld_base integer;
cte_rvr integer;
answer integer;
BEGIN
if VIS1_in > VIS2_in then cte_vis := VIS2_in;
elsif VIS1_in <= VIS2_in then cte_vis := VIS1_in;
end if;
if RVR1_in > RVR2_in then cte_RVR := RVR2_in;
elsif RVR1_in <= RVR2_in then cte_RVR := RVR1_in;
end if;
-- lowest reported cloud base, converted from hundreds of feet to feet
cte_cld_base := 100*least(CLD_BASE1_in,CLD_BASE2_in,CLD_BASE3_in);
if CLD_COVER1_in in ('BKN', 'OVC') then cte_cld_cover := 1;
elsif CLD_COVER2_in in ('BKN', 'OVC') then cte_cld_cover := 1;
elsif CLD_COVER3_in in ('BKN', 'OVC') then cte_cld_cover := 1;
else cte_cld_cover := null;
end if;
-- visibility (VIS1/VIS2) is not used in the computation; RVR and ceiling determine the code

if
((cte_rvr <= 325) or (cte_cld_cover = 1 and cte_CLD_BASE <= 50))
then answer := 5;
elsif
((cte_rvr between 350 and 500) or (cte_cld_cover = 1 and cte_CLD_BASE between 100 and 150))
then answer := 4;
elsif

((cte_rvr between 550 and 750) or (cte_cld_cover = 1 and cte_CLD_BASE between 200 and 250))
then answer := 2;
else answer := 0;
end if;

RETURN answer;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
WHEN OTHERS THEN
-- Consider logging the error and then re-raise
RAISE;
END Get_Visi_Ceiling_code_V_2_3;
/

Precipitations
CREATE OR REPLACE FUNCTION PRUTEST.get_precipitations_code_V_2_3
(precipitations_in varchar2)
RETURN INTEGER IS
answer integer;
BEGIN

answer := null;
if
instr(precipitations_in,'FZ')>0 then answer := 3;
elsif (instr(precipitations_in,'SN')>0 and instr(precipitations_in,'-')>0) then answer := 2;
elsif instr(precipitations_in,'SN')>0 then answer := 3;
elsif instr(precipitations_in,'SG')>0 then answer := 2;
elsif instr(precipitations_in,'+RA')>0 then answer := 2;
elsif instr(precipitations_in,'+SHRA')>0 then answer := 2;
elsif (instr(precipitations_in,'RA')>0 and instr(precipitations_in,'-')>0) then answer := 0;
elsif instr(precipitations_in,'IC')>0 then answer := 1;
elsif instr(precipitations_in,'RA')>0 then answer := 1;
elsif instr(precipitations_in,'UP')>0 then answer := 1;
elsif instr(precipitations_in,'DZ')>0 then answer := 1;
else answer := 0;
end if;
RETURN answer;
END get_precipitations_code_V_2_3;
/

Freezing conditions
CREATE OR REPLACE FUNCTION PRUTEST.GET_FZ_CONDITIONS_CODE_V_2_3
(T_in INTEGER,TD_in INTEGER, PRESENT_WEATHER_in VARCHAR2)

RETURN INTEGER
IS
cte_visible_moisture INTEGER;
cte_temperature_dew INTEGER;
answer INTEGER;

BEGIN
cte_visible_moisture := 0;
answer := null;
if
instr(PRESENT_WEATHER_in, 'FZRA')>0 then cte_visible_moisture := 5;
elsif instr(PRESENT_WEATHER_in, '+RA')>0 then cte_visible_moisture := 4;
elsif instr(PRESENT_WEATHER_in, 'SG')>0 then cte_visible_moisture := 4;
elsif instr(PRESENT_WEATHER_in, 'RASN')>0 then cte_visible_moisture := 4;
elsif (instr(PRESENT_WEATHER_in, 'SN')>0 and instr(PRESENT_WEATHER_in, '-')>0) then cte_visible_moisture := 4;
elsif instr(PRESENT_WEATHER_in, 'SN')>0 then cte_visible_moisture := 5;
elsif instr(PRESENT_WEATHER_in, 'BR')>0 then cte_visible_moisture := 4;
elsif instr(PRESENT_WEATHER_in, 'RA')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'PL')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'IC')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'GR')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'GS')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'UP')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'FG')>0 then cte_visible_moisture := 3;
elsif instr(PRESENT_WEATHER_in, 'DZ')>0 then cte_visible_moisture := 3;
elsif PRESENT_WEATHER_IN is not null then cte_visible_moisture := 0;
else cte_visible_moisture := null;
end if;
cte_temperature_dew := t_in - TD_in;
if t_in <= 3 and cte_visible_moisture = 5 then answer := 4;
elsif t_in <-15 and (cte_visible_moisture is not null) then answer := 4;
elsif t_in <= 3 and cte_visible_moisture = 4 then answer := 3;
elsif t_in <= 3 and (cte_visible_moisture = 3 or cte_temperature_dew < 3) then answer := 1;
elsif t_in <= 3 and cte_visible_moisture is null then answer := 0;
elsif t_in > 3 and cte_visible_moisture > 0 then answer := 0;
elsif t_in > 3 and (cte_visible_moisture is null or cte_temperature_dew >= 3) then answer := 0;
else answer := 0;
end if;
RETURN answer;
END get_fz_conditions_code_V_2_3;
/

Wind
CREATE OR REPLACE FUNCTION PRUTEST.get_wind_intensity_code_v_2_3
(

DIR_in INTEGER,
SPD_in INTEGER,
SPD_GUST_in INTEGER
)
RETURN INTEGER
IS
wind integer;
answer integer;
BEGIN
wind := spd_in;
if wind <= 15 then answer := 0;
elsif wind > 15 and wind <= 20 then answer := 1;
elsif wind > 20 and wind <= 30 then answer := 2;
elsif wind > 30 then answer := 3;
else answer := null;
end if;
if
spd_gust_in is not null then answer := answer + 1;
else answer := answer;
end if;
RETURN answer;
END get_wind_intensity_code_v_2_3;
/

Dangerous phenomena

CREATE OR REPLACE FUNCTION PRUTEST.GET_CONVECT_WEATHER_CODE_v_2_3


(PRESENT_WEATHER_IN VARCHAR2,
CLD_COVER_in1 varchar2, CLD_TYPE_in1 varchar2,
CLD_COVER_in2 varchar2, CLD_TYPE_in2 varchar2,
CLD_COVER_in3 varchar2, CLD_TYPE_in3 varchar2)
RETURN INTEGER
IS
dangerous_phenomena integer;
cb integer;
tcu integer;
ts_hint integer;
answer INTEGER;
BEGIN
if instr(PRESENT_WEATHER_IN, 'FC')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'DS')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'SS')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'VA')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'SA')>0 then dangerous_phenomena:= 24;

elsif instr(PRESENT_WEATHER_IN, 'SQ')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'GS')>0 then dangerous_phenomena:= 18;
elsif instr(PRESENT_WEATHER_IN, 'GR')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'PL')>0 then dangerous_phenomena:= 24;
elsif instr(PRESENT_WEATHER_IN, 'TS')>0 and instr(PRESENT_WEATHER_IN, '+')>0
then dangerous_phenomena:= 30;
elsif instr(PRESENT_WEATHER_IN, 'TS')>0 then dangerous_phenomena:= 24;
else dangerous_phenomena:= 0;
end if;
if (CLD_COVER_in1 in ('OVC') and CLD_TYPE_in1 in ('CB')) then cb:= 12;
elsif (CLD_COVER_in2 in ('OVC') and CLD_TYPE_in2 in ('CB')) then cb:= 12;
elsif (CLD_COVER_in3 in ('OVC') and CLD_TYPE_in3 in ('CB')) then cb:= 12;
elsif (CLD_COVER_in1 in ('BKN') and CLD_TYPE_in1 in ('CB')) then cb:= 10;
elsif (CLD_COVER_in2 in ('BKN') and CLD_TYPE_in2 in ('CB')) then cb:= 10;
elsif (CLD_COVER_in3 in ('BKN') and CLD_TYPE_in3 in ('CB')) then cb:= 10;
elsif (CLD_COVER_in1 in ('SCT') and CLD_TYPE_in1 in ('CB')) then cb:= 6;
elsif (CLD_COVER_in2 in ('SCT') and CLD_TYPE_in2 in ('CB')) then cb:= 6;
elsif (CLD_COVER_in3 in ('SCT') and CLD_TYPE_in3 in ('CB')) then cb:= 6;
elsif (CLD_COVER_in1 in ('FEW') and CLD_TYPE_in1 in ('CB')) then cb:= 4;
elsif (CLD_COVER_in2 in ('FEW') and CLD_TYPE_in2 in ('CB')) then cb:= 4;
elsif (CLD_COVER_in3 in ('FEW') and CLD_TYPE_in3 in ('CB')) then cb:= 4;
else cb:= 0;
end if;
if (CLD_COVER_in1 in ('OVC') and CLD_TYPE_in1 in ('TCU')) then tcu:= 10;
elsif (CLD_COVER_in2 in ('OVC') and CLD_TYPE_in2 in ('TCU')) then tcu:= 10;
elsif (CLD_COVER_in3 in ('OVC') and CLD_TYPE_in3 in ('TCU')) then tcu:= 10;
elsif (CLD_COVER_in1 in ('BKN') and CLD_TYPE_in1 in ('TCU')) then tcu:= 8;
elsif (CLD_COVER_in2 in ('BKN') and CLD_TYPE_in2 in ('TCU')) then tcu:= 8;
elsif (CLD_COVER_in3 in ('BKN') and CLD_TYPE_in3 in ('TCU')) then tcu:= 8;
elsif (CLD_COVER_in1 in ('SCT') and CLD_TYPE_in1 in ('TCU')) then tcu:= 5;
elsif (CLD_COVER_in2 in ('SCT') and CLD_TYPE_in2 in ('TCU')) then tcu:= 5;
elsif (CLD_COVER_in3 in ('SCT') and CLD_TYPE_in3 in ('TCU')) then tcu:= 5;
elsif (CLD_COVER_in1 in ('FEW') and CLD_TYPE_in1 in ('TCU')) then tcu:= 3;
elsif (CLD_COVER_in2 in ('FEW') and CLD_TYPE_in2 in ('TCU')) then tcu:= 3;
elsif (CLD_COVER_in3 in ('FEW') and CLD_TYPE_in3 in ('TCU')) then tcu:= 3;
else tcu:= 0;
end if;
if (cb = 12) and instr(PRESENT_WEATHER_IN, '-SH')>0 then ts_hint := 18;
elsif (cb = 10 or tcu = 10) and instr(PRESENT_WEATHER_IN, '-SH')>0 then ts_hint := 12;
elsif (cb = 6 or tcu = 8) and instr(PRESENT_WEATHER_IN, '-SH')>0 then ts_hint := 10;
elsif (cb = 4 or tcu = 5) and instr(PRESENT_WEATHER_IN, '-SH')>0 then ts_hint := 8;
elsif (tcu = 3) and instr(PRESENT_WEATHER_IN, '-SH')>0 then ts_hint := 4;
elsif (cb = 12) and instr(PRESENT_WEATHER_IN, 'SH')>0 then ts_hint := 24;
elsif (cb = 10 or tcu = 10) and instr(PRESENT_WEATHER_IN, 'SH')>0 then ts_hint := 20;
elsif (cb = 6 or tcu = 8) and instr(PRESENT_WEATHER_IN, 'SH')>0 then ts_hint := 15;

elsif (cb = 4 or tcu = 5) and instr(PRESENT_WEATHER_IN, 'SH')>0 then ts_hint := 12;
elsif (tcu = 3) and instr(PRESENT_WEATHER_IN, 'SH')>0 then ts_hint := 6;
else ts_hint := 0;
end if;
answer := greatest(dangerous_phenomena,cb,tcu,ts_hint);

RETURN answer;
END get_convect_weather_code_v_2_3;
/

