Cdo Guide
2. Reference manual
2.1. Information
2.1.1. INFO - Information and simple statistics
2.1.2. SINFO - Short information
2.1.3. XSINFO - Extra short information
2.1.4. DIFF - Compare two datasets field by field
2.1.5. NINFO - Print the number of parameters, levels or times
2.1.6. SHOWINFO - Show variables, levels or times
2.1.7. SHOWATTRIBUTE - Show attributes
2.1.8. FILEDES - Dataset description
2.2. File operations
2.2.1. APPLY - Apply operators
2.2.2. COPY - Copy datasets
2.2.3. TEE - Duplicate a data stream and write it to file
2.2.4. PACK - Pack data
2.2.5. UNPACK - Unpack data
3. Contributors
3.1. History
3.2. External sources
3.3. Contributors
Index
1. Introduction
The Climate Data Operator (CDO) software is a collection of many operators for standard processing of
climate and forecast model data. The operators include simple statistical and arithmetic functions, data
selection and subsampling tools, and spatial interpolation. CDO was developed to have the same set of
processing functions for GRIB [GRIB] and NetCDF [NetCDF] datasets in one package.
The Climate Data Interface [CDI] is used for the fast and file format independent access to GRIB and
NetCDF datasets. The local MPI-MET data formats SERVICE, EXTRA and IEG are also supported.
There are some limitations for GRIB and NetCDF datasets:
GRIB datasets have to be consistent, similar to NetCDF. That means all time steps need to have the same
variables, and within a time step each variable may occur only once. Multiple fields in single GRIB2
messages are not supported!
NetCDF datasets are only supported for the classic data model and arrays up to 4 dimensions. These
dimensions should only be used by the horizontal and vertical grid and the time. The NetCDF
attributes should follow the GDT, COARDS or CF Conventions.
The main CDO features are:
• More than 700 operators available
• Modular design and easily extendable with new operators
• Very simple UNIX command line interface
• A dataset can be processed by several operators, without storing the interim results in files
• Most operators handle datasets with missing values
• Fast processing of large datasets
• Support of many different grid types
• Tested on many UNIX/Linux systems, Cygwin, and MacOS-X
The latest PDF documentation can be found here.
1.1. Installation
CDO is supported on different operating systems such as Unix, macOS and Windows. This section describes
how to install CDO on those platforms. More examples can be found on the main website (https://code.
mpimet.mpg.de/projects/cdo/wiki)
1.1.1. Unix
1.1.1.1. Prebuilt CDO packages
Prebuilt CDO packages are available in the online repositories of many Unix distributions and can be
installed directly from the Unix terminal.
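For example, on the common distribution families (the `cdo` package exists in the standard Debian/Ubuntu and Fedora repositories; the exact repository and version depend on your system):

```shell
# Debian / Ubuntu (package name "cdo"):
sudo apt-get install cdo

# Fedora:
sudo dnf install cdo
```
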
Note that prebuilt packages often lag behind the most recent release, and the available version varies
between Unix distributions (see table below). Building from source, or using a Conda environment, is
recommended for an up-to-date version or a customised setup.
1.1.1.2. Building from source
CDO uses the GNU configure and build system for compilation. The only requirement is a working ISO
C++17 and C11 compiler.
First go to the download page (https://code.mpimet.mpg.de/projects/cdo) to get the latest distribution, if you do not have it yet.
To take full advantage of CDO features the following additional libraries should be installed:
CDO is a multi-threaded application. Therefore all the above libraries should be compiled thread safe.
Using non-threadsafe libraries could cause unexpected errors!
Compilation
./configure
./configure --help
make
The program should compile without problems and the binary (cdo) should be available in the src
directory of the distribution.
Installation
After the compilation of the source code do a make install, possibly as root if the destination
permissions require that.
make install
The binary is installed into the directory <prefix>/bin. <prefix> defaults to /usr/local but
can be changed with the --prefix option of the configure script.
Alternatively, you can also copy the binary from the src directory manually to some bin directory
in your search path.
1.1.1.3. Conda
Conda is an open-source package manager and environment management system for various languages
(Python, R, etc.). Conda is installed via Anaconda or Miniconda; unlike Anaconda, Miniconda is a
lightweight Conda distribution. Both can be downloaded from the main Conda website (https://conda.
io/projects/conda/en/latest/user-guide/install/linux.html) or on the terminal
wget https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh
bash Anaconda3-2021.11-Linux-x86_64.sh
source ~/.bashrc
or, for Miniconda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
Once your Conda environment is set up, you can install CDO using conda.
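For example, from the conda-forge channel, which provides CDO builds:

```shell
conda install -c conda-forge cdo
```
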
1.1.2. MacOS
Among the macOS package managers, CDO can be installed from Homebrew and MacPorts. The installation via Homebrew is a straightforward process on the terminal.
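With the Homebrew `cdo` formula:

```shell
brew install cdo
```
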
Similarly for MacPorts. In contrast to Homebrew, MacPorts allows you to enable GRIB2, szip compression
and Magics++ graphics support in the CDO installation.
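A minimal sketch; the available optional build variants and their exact names should be checked with `port variants`:

```shell
# Basic installation:
sudo port install cdo

# List optional build variants (e.g. GRIB2, szip, Magics++):
port variants cdo
```
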
In addition, you can also set up CDO via Conda, as on Unix. You can follow this tutorial to install Anaconda
or Miniconda on your computer (https://conda.io/projects/conda/en/latest/user-guide/install/
macos.html). Then, you can install cdo in the same way as described for Unix above.
1.1.3. Windows
Currently, CDO is not supported on Windows and the binary is not available in the Windows Conda
repository. Therefore, CDO needs to be set up in a virtual environment. This section covers the installation
of CDO using the Windows Subsystem for Linux (WSL) and virtual machines.
1.1.3.1. WSL
WSL emulates Unix inside your Windows system, so you can install Unix libraries and software such
as CDO or the Linux Conda distribution on your computer. It also allows you to share files directly
between Windows and the WSL environment. However, more complex functions that require a graphical
interface are not supported.
In Windows 10 or newer, WSL can be readily set in your cmd by typing
wsl --install
This command installs, by default, Ubuntu 20.04 in WSL2. You can also choose a different system from
the list printed by:
wsl -l -o
1.1.3.2. Virtual machines
Virtual machines can emulate different operating systems on your computer: they are guest computers
mounted inside your host computer. In this particular case, you can set up a Linux distribution on your
Windows device. The advantages of virtual machines over WSL are the graphical interface and the fully
operational Linux system. You can follow any tutorial on the internet, such as this one:
https://ubuntu.com/tutorials/how-to-run-ubuntu-desktop-on-a-virtual-machine-using-virtualbox#
1-overview
Finally, you can install CDO following any of the methods listed in section 1.1.1.
1.2. Usage
This section describes how to use CDO. The syntax is:
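The general command-line form, as used throughout this guide:

```shell
cdo [Options] Operator1 [-Operator2 [-OperatorN]]
```

Options always precede the first operator; each chained operator after the first is prefixed with "-".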
1.2.1. Options
All options have to be placed before the first operator. The following options are available for all operators:
--percentile <method>
Methods: nrank, nist, rtype8, <NumPy method (linear|lower|higher|nearest|...)>
--reduce_dim Reduce NetCDF dimensions.
-R, --regular Convert GRIB1 data from global reduced to regular Gaussian grid (only with cgribex lib).
-r Generate a relative time axis.
-S Create an extra output stream for the module TIMSTAT. This stream contains
the number of non missing values for each output period.
-s, --silent Silent mode.
--shuffle Specify shuffling of variable data bytes before compression (NetCDF).
--single Use single precision floats for data in memory.
--sortname Alphanumeric sorting of NetCDF parameter names.
-t <partab> Set the GRIB1 (cgribex) default parameter table name or file (see chapter 1.6 on page 24).
Predefined tables are: echam4 echam5 echam6 mpiom1 ecmwf remo
--timestat_date <srcdate>
Target timestamp (temporal statistics): first, middle, midhigh or last source timestep.
-V, --version Print the version number.
-v, --verbose Print extra details for some operators.
-w Disable warning messages.
--worker <num> Number of workers to decode/decompress GRIB records.
-z aec AEC compression of GRIB1 records.
jpeg JPEG compression of GRIB2 records.
zip[_1-9] Deflate compression of NetCDF4 variables.
zstd[_1-19] Zstandard compression of NetCDF4 variables.
1.2.2. Environment variables
There are some environment variables which influence the behavior of CDO. An incomplete list can be
found in Appendix A.
Here is an example of setting the environment variable CDO_RESET_HISTORY for different shells:
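A sketch for the two common shell families:

```shell
# Bourne-compatible shells (bash, ksh, zsh):
export CDO_RESET_HISTORY=1

# C-shell family (csh, tcsh) would instead use:
# setenv CDO_RESET_HISTORY 1
```
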
1.2.3. Operators
There are more than 700 operators available. A detailed description of all operators can be found in the
Reference Manual section.
Some CDO operators are shared-memory parallelized with OpenMP. An OpenMP-enabled C compiler
is needed to use this feature. Users may request a specific number of OpenMP threads with the
'-P' switch.
Here is an example to distribute the bilinear interpolation on 8 OpenMP threads:
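A sketch of such a call; the target grid r360x180 (a 1-degree global lon/lat grid) and the file names are illustrative choices:

```shell
# Run bilinear remapping to a 360x180 lon/lat grid on 8 OpenMP threads
cdo -P 8 remapbil,r360x180 infile outfile
```
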
Many CDO operators are I/O-bound, which means most of the time is spent reading and writing the
data. Only compute-intensive CDO operators are parallelized. An incomplete list of OpenMP-parallelized
operators can be found in Appendix B.
Some operators need one or more parameters. A list of parameters is separated by ','.
• STRING
String parameters require quotes if the string contains blanks or other characters interpreted by the
shell. The following command selects the variables with the names pressure and tsurf:
cdo selvar,pressure,tsurf infile outfile
• FLOAT
Floating point number in any representation. The following command sets the range between 0 and
273.15 of all fields to missing value:
cdo setrtomiss,0,273.15 infile outfile
• BOOL
Boolean parameters can be given as TRUE/FALSE, T/F or 0/1. To disable the weighting by grid cell
area in the calculation of a field mean, use:
cdo fldmean,weights=FALSE infile outfile
• INTEGER
A range of integer parameters can be specified by first/last[/inc]. To select the days 5, 6, 7, 8 and 9
use:
cdo selday,5/9 infile outfile
The result is the same as:
cdo selday,5,6,7,8,9 infile outfile
1.2.6. Operator chaining
Operator chaining allows combining two or more operators on the command line into a single CDO call.
This allows the creation of complex operations out of simpler ones: reductions over several dimensions,
file merges and all kinds of analysis processes. All operators with a fixed number of input streams and
one output stream can pass their result directly to another operator. To distinguish operators from files,
every chained operator must be written with a prepended "-".
cdo -monmean -add -mulc,2.0 infile1 -daymean infile2 outfile (CDO example call)
Here monmean takes the output of add, while add takes the outputs of mulc,2.0 and daymean; infile1
and infile2 are inputs for their respective predecessors. When mixing operators with an arbitrary number
of input streams, extra care needs to be taken. The following examples illustrate why.
1. cdo info -timavg infile1 infile2
2. cdo info -timavg infile?
3. cdo timavg infile1 tmpfile
cdo info tmpfile infile2
rm tmpfile
All three examples produce identical results. The time average will be computed only on the first input file.
Note(1): In section 1.3.2 we introduce argument groups, which will make this a lot easier and less error-prone.
Note(2): Operator chaining is implemented over POSIX Threads (pthreads). Therefore this CDO feature
is not available on operating systems without POSIX Threads support!
Combining operators can have several benefits. The most obvious is a performance increase through
reduced disk I/O: no intermediate files have to be written and read back. Especially with large input
files, the reading and writing of intermediate files can have a big influence on the overall performance.
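As an illustration (the choice of operators is arbitrary), a chained call avoids the temporary file of the equivalent two-step variant:

```shell
# Chained: no intermediate file is written
cdo -fldmean -timmean infile outfile

# Equivalent two-step variant with a temporary file
cdo timmean infile tmpfile
cdo fldmean tmpfile outfile
rm tmpfile
```
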
A second aspect is the execution of operators: limited only by the algorithms, potentially all operators of
a chain can run in parallel.
1.3. Advanced Usage
In this section we introduce advanced features of CDO. These include operator grouping, which allows
writing more complex CDO calls; the apply keyword, which shortens calls that need an operator to be
executed on multiple files; and wildcards, which allow searching paths for file signatures. These features
have several restrictions and follow rules that depend on the input/output properties of the operators.
These properties can be investigated with the following commands, which output a list of operators that
have the selected property:
• arbitrary describes all operators where the number of inputs is not defined.
• filesOnly are operators that can have other operators as input.
• onlyFirst shows which operators can only be at the left-most position of the Polish-notation argument
chain.
• noOutput are all operators that do not write to any file (e.g. info).
• obase describes operators that do not use the output argument as a file but, e.g., as a file
name base (output base). This is almost exclusively used by operators that split input files.
For checking one or more operators directly, the following usage of --attribs can be used:
1.3.1. Wildcards
Wildcards are a standard feature of command-line interpreters (shells) on many operating systems. They
are placeholder characters used in file paths that are expanded by the interpreter into file lists. For further
information the Advanced Bash-Scripting Guide is a valuable source. Handling of input is a central issue
for CDO, and in some circumstances it is not enough to use the wildcards of the shell; that is why CDO
can handle them on its own.
In earlier versions of CDO this was necessary to have the right files passed to the right operator. Newer
versions support this with the argument grouping feature (see 1.3.2). We advise using the grouping
mechanism instead of single-quoted wildcards, since the latter feature could be deprecated in future versions.
Note: Wildcard expansion is not available on operating systems without the glob() function!
1.3.2. Argument groups
In section 1.2.6 we described that it is not possible to chain operators with an arbitrary number of inputs.
In this section we show how this can be achieved through operator grouping with angled brackets [ ].
Using these brackets, CDO can assign the inputs to their corresponding operators during the execution of
the command line. The ability to write operator combinations in a parenthesis-free way is partly given
up in favor of allowing operators with an arbitrary number of inputs. This allows a much more compact
way to handle large numbers of input files.
The following example will be transformed from a non-working call into a working one.
cdo -infon -div -fldmean -cat infileA -mulc,2.0 infileB -fldmax infileC
cdo (Warning): Did you forget to use ’[’ and/or ’]’ for multiple variable input operators?
cdo (Warning): use option --variableInput, for description
cdo (Abort): Too few streams specified! Operator div needs 2 input streams and 1 output stream!
The error is raised by the operator div, which needs two input streams and one output stream. The cat
operator has claimed all possible streams on its right-hand side as input, because it accepts an arbitrary
number of inputs; hence it left nothing for the remaining input and output streams of div. To fix this
we declare a group, which is passed as a whole to the operator left of the group.
cdo -infon -div -fldmean -cat [ infileA -mulc,2.0 infileB ] -fldmax infileC
cdo -infon -div -fldmean -cat [ infileA infileB -merge [ infileC1 infileC2 ] ] -fldmax infileD
When working with a medium or large number of similar files, there is a common problem: a processing
step (often a reduction) needs to be performed on all of them before a more specific analysis can be
applied. Usually this can be done in two ways. One option is to use merge to glue everything together and
chain the reduction step after it. The second option is to write a for-loop over all inputs which performs
the basic processing on each file separately and then calls merge on the results. Unfortunately both
options have side-effects: the first needs a lot of memory, because all files are read in completely and
reduced afterwards, while the latter creates a lot of temporary files. Both memory and disk I/O can be
bottlenecks and should be avoided.
1.3.3. The apply keyword
The apply keyword was introduced for that purpose. It can be used like an operator, but it needs at least
one operator as a parameter, which is applied to all related input streams in parallel before all streams
are passed to the next operator in the chain.
The following is an example with three input files:
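A sketch of such a call, following the pattern used later in this section (file names are illustrative):

```shell
# Compute daily means of three files in parallel, then merge the results
cdo -merge [ -apply,-daymean [ infile1 infile2 infile3 ] ] outfile
```
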
Apply is especially useful when combined with wildcards. The previous example can be shortened further.
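Assuming the inputs match the pattern infile?, the single-quoted CDO-internal wildcard form would be:

```shell
cdo -merge [ -apply,-daymean [ 'infile?' ] ] outfile
```
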
As shown, this feature simplifies commands with a medium number of files and moves reductions further
back in the chain, which can also have a positive impact on performance.
Apply saves the day, creating the call above with much less typing.
cdo -yearmean -merge [ -apply,-daymean [ f1 ... f40 ] ]
In the example in figure 1.2 the resulting call will dramatically reduce process interaction as well as
execution time, since the reduction (daymean) is applied to the files first. The merge operator therefore
receives the reduced files, and the work of merging the full data is saved. For other CDO calls further
improvements can be made by adding more arguments to apply (1.3).
Restrictions: While the apply keyword can be extremely helpful it has several restrictions (for now!).
• Apply inputs can only be files, wildcards and operators that have 0 inputs and 1 output.
• Apply cannot be used as the first CDO operator.
• Apply arguments can only be operators with 1 input and 1 output.
• Grouping inside the Apply argument or input is not allowed.
1.5. Horizontal grids
A single point of a horizontal grid represents the mean of a grid cell. These grid cells are typically of
different sizes, because the grid points are at varying distances from each other.
Area weights are individual weights for each grid cell. They are needed to compute the area-weighted
mean or variance of a set of grid cells (e.g. fldmean, the mean value of all grid cells). In CDO the area
weights are derived from the grid cell area. If the cell area is not available, it is computed from the
geographical coordinates via spherical triangles. This is only possible if the geographical coordinates of
the grid cell corners are available or derivable. Otherwise CDO gives a warning message and uses constant
area weights for all grid cells.
The cell area is read automatically from a NetCDF input file if a variable has the corresponding “cell_measures”
attribute, e.g.:
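A minimal CF-style illustration (the variable names temp and cell_area are hypothetical):

```
double temp(time, lat, lon) ;
        temp:cell_measures = "area: cell_area" ;
double cell_area(lat, lon) ;
        cell_area:units = "m2" ;
```
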
If the computed cell area is not desired then the CDO operator setgridarea can be used to set or overwrite
the grid cell area.
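For instance (the name gridarea.nc is illustrative), assuming a file holding the desired cell areas:

```shell
# Replace the grid cell area before computing an area-weighted field mean
cdo -fldmean -setgridarea,gridarea.nc infile outfile
```
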
Predefined grids are available for global regular, Gaussian, HEALPix or icosahedral-hexagonal GME grids.
global_<DXY> defines a global regular lon/lat grid. The grid increment <DXY> can be chosen arbitrarily.
The longitudes start at <DXY>/2 - 180◦ and the latitudes start at <DXY>/2 - 90◦ .
dcw:<CountryCode>[_<DXY>] defines a regional regular lon/lat grid from the country code. The default
value of the optional grid increment <DXY> is 0.1 degree. The ISO two-letter country codes can be found
on https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2. To define a state, append the state code to the
country code, e.g. USAK for Alaska. For the coordinates of a country CDO uses the DCW (Digital Chart
of the World) dataset from GMT. This dataset must be installed on the system and the environment
variable DIR_DCW must point to it.
zonal_<DY> defines a grid with zonal latitudes only. The latitude increment <DY> can be chosen arbitrarily.
The latitudes start at <DY>/2 - 90◦ . The boundaries of each latitude are also generated. The number of
longitudes is 1. A grid description of this type is needed to calculate the zonal mean (zonmean) for data
on an unstructured grid.
r<NX>x<NY> defines a global regular lon/lat grid. The number of the longitudes <NX> and the latitudes
<NY> can be chosen arbitrarily. The longitudes start at 0◦ with an increment of (360/<NX>)◦ . The latitudes
go from south to north with an increment of (180/<NY>)◦ .
18
Introduction Horizontal grids
F<XXX> defines a global regular Gaussian grid. XXX specifies the number of latitude lines between the Pole
and the Equator. The longitudes start at 0◦ with an increment of (360/nlon)◦ . The Gaussian latitudes go
from north to south.
gme<NI> defines a global icosahedral-hexagonal GME grid. NI specifies the number of intervals on a main
triangle side.
You can use the grid description from another data file. The format of the data file and the grid of the data
field must be supported by CDO. Use the operator 'sinfo' to get short information about your variables
and grids. If there is more than one grid in the data file, the grid description of the first variable is
used. Add the extension :N to the name of the data file to select grid number N.
SCRIP (Spherical Coordinate Remapping and Interpolation Package) uses a common grid description for
curvilinear and unstructured grids. For more information about the convention see [SCRIP]. This grid
description is stored in NetCDF; therefore it is only available if CDO was compiled with NetCDF support!
SCRIP grid description example of a curvilinear MPIOM [MPIOM] GROB3 grid (only the NetCDF header):
netcdf grob3s {
dimensions:
        grid_size = 12120 ;
        grid_corners = 4 ;
        grid_rank = 2 ;
variables:
        int grid_dims(grid_rank) ;
        double grid_center_lat(grid_size) ;
                grid_center_lat:units = "degrees" ;
                grid_center_lat:bounds = "grid_corner_lat" ;
        double grid_center_lon(grid_size) ;
                grid_center_lon:units = "degrees" ;
                grid_center_lon:bounds = "grid_corner_lon" ;
        int grid_imask(grid_size) ;
                grid_imask:units = "unitless" ;
                grid_imask:coordinates = "grid_center_lon grid_center_lat" ;
        double grid_corner_lat(grid_size, grid_corners) ;
                grid_corner_lat:units = "degrees" ;
        double grid_corner_lon(grid_size, grid_corners) ;
                grid_corner_lon:units = "degrees" ;

// global attributes:
        :title = "grob3s" ;
}
All supported grids can also be described with the CDO grid description. The following keywords can be
used to describe a grid:
Which keywords are necessary depends on the gridtype. The following table gives an overview of the
default values or the size with respect to the different grid types.
The keywords nvertex, xbounds and ybounds are optional if area weights are not needed. The grid cell
corners xbounds and ybounds have to be ordered counterclockwise.
20
Introduction Horizontal grids
CDO grid description example of a global regular grid with 60x30 points:
gridtype = lonlat
xsize    = 60
ysize    = 30
xfirst   = -177
xinc     = 6
yfirst   = -87
yinc     = 6
The description of a projection is somewhat more complicated. Use the first section to describe the
coordinates of the projection with the keywords above. Add the keyword grid_mapping_name to
describe the mapping between the given coordinates and the true latitude and longitude coordinates.
grid_mapping_name takes a string value that contains the name of the projection. A list of attributes
can be added to define the mapping. The names of the attributes depend on the projection. The valid
names of the projections and their attributes follow the NetCDF CF Conventions.
CDO supports the special grid mapping attribute proj_params. These parameters are passed directly
to the PROJ library to generate the geographic coordinates if needed.
The geographic coordinates of the following projections can be generated without the attribute proj_params,
if all other attributes are available:
• rotated_latitude_longitude
• lambert_conformal_conic
• lambert_azimuthal_equal_area
• sinusoidal
• polar_stereographic
It is recommended to set the attribute proj_params for the above projections as well, to make sure all
PROJ parameters are set correctly.
Here is an example of a CDO grid description using the attribute proj_params to define the PROJ
parameter of a polar stereographic projection:
gridtype          = projection
xsize             = 11
ysize             = 11
xunits            = "meter"
yunits            = "meter"
xfirst            = -638000
xinc              = 150
yfirst            = -3349350
yinc              = 150
grid_mapping      = crs
grid_mapping_name = polar_stereographic
proj_params       = "+proj=stere +lon_0=-45 +lat_ts=70 +lat_0=90 +x_0=0 +y_0=0"
The result is the same as using the CF conform Grid Mapping Attributes:
gridtype          = projection
xsize             = 11
ysize             = 11
xunits            = "meter"
yunits            = "meter"
xfirst            = -638000
xinc              = 150
yfirst            = -3349350
yinc              = 150
grid_mapping      = crs
grid_mapping_name = polar_stereographic
straight_vertical_longitude_from_pole = -45.
standard_parallel                     = 70.
latitude_of_projection_origin         = 90.
false_easting                         = 0.
false_northing                        = 0.
Example CDO descriptions of a curvilinear and an unstructured grid can be found in Appendix D.
ICON model data in NetCDF format contains the global attribute grid_file_uri. This attribute contains
a link to the appropriate grid file on the ICON grid file server. If the global attribute grid_file_uri is
present and valid, the grid information can be added automatically. The setgrid function is then no longer
required. The environment variable CDO_DOWNLOAD_PATH can be used to select a directory for storing the
grid file. If this environment variable is set, the grid file will be automatically downloaded from the grid file
server to this directory if needed. If the grid file already exists in the current directory, the environment
variable does not need to be set.
If the grid files are available locally, like at DKRZ, they do not need to be fetched from the grid file server.
Use the environment variable CDO_ICON_GRIDS to set the root directory of the ICON grids. Here is an
example for the ICON grids at DKRZ:
CDO_ICON_GRIDS=/pool/data/ICON
1.6. Z-axis description
The keywords lbounds and ubounds are optional. vctsize and vct are only necessary to define hybrid
model levels.
Z-axis description example for pressure levels 100, 200, 500, 850 and 1000 hPa:
zaxistype = pressure
size = 5
levels = 10000 20000 50000 85000 100000
Note that the vctsize is twice the number of levels plus two and the vertical coordinate table must be
specified for the level interfaces.
1.7. Time axis
A time axis describes the time of every timestep. Two types are available: the absolute and the
relative time axis. CDO tries to maintain the actual type of the time axis for all operators.
An absolute time axis carries the current time at each timestep. It can be used without knowledge of the
calendar and is preferred by climate models. In NetCDF files the absolute time axis is represented
by the time unit "day as %Y%m%d.%f".
A relative time is the time relative to a fixed reference time. The current time results from the reference time
and the elapsed interval; the result depends on the calendar used. CDO supports the standard Gregorian,
proleptic Gregorian, 360-day, 365-day and 366-day calendars. The relative time axis is preferred
by numerical weather prediction models. In NetCDF files the relative time axis is represented by the unit
of the time: "time-units since reference-time", e.g. "days since 1989-6-15 12:00".
Some programs which work with NetCDF data can only process relative time axes. Therefore it may be
necessary to convert an absolute into a relative time axis. This conversion can be done for each
operator with the CDO option '-r'. To convert a relative into an absolute time axis, use the CDO option
'-a'.
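For example, combined with the copy operator (a common idiom; file names are illustrative):

```shell
# Convert an absolute time axis to a relative one while copying
cdo -r copy infile outfile
```
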
1.8. Parameter table
A parameter table is an ASCII-formatted file that maps code numbers to variable names. Each variable
has one line with its code number, name and a description with optional units in a blank-separated list. It
can only be used for GRIB, SERVICE, EXTRA and IEG formatted files. The CDO option '-t <partab>'
sets the default parameter table for all input files. Use the operator 'setpartab' to set the parameter table
for a specific file.
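An illustrative fragment of such a table (the code numbers and names below are ECHAM-style examples, not a complete table):

```
134  aps  surface pressure [Pa]
139  ts   surface temperature [K]
165  u10  10m u-velocity [m/s]
```
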
1.9. Missing values
Missing values are data points that are missing or invalid. Such data points are treated differently from
valid data. Most CDO operators can handle missing values in a smart way, but if the missing value lies
within the range of valid data it can lead to incorrect results. This applies to all arithmetic operations,
and especially to logical operations when the missing value is 0 or 1.
The default missing value for GRIB, SERVICE, EXTRA and IEG files is -9.e33. The CDO option '-m
<missval>' overwrites the default missing value. In NetCDF files the variable attribute '_FillValue' is
used as the missing value. The operator 'setmissval' can be used to set a new missing value.
The CDO use of the missing value is shown in the following tables, where one table is printed for each
operation. The operations are applied to arbitrary numbers a, b, the special case 0, and the missing value
miss. For example the table named "addition" shows that the sum of an arbitrary number a and the
missing value is the missing value, and the table named "multiplication" shows that 0 multiplied by missing
value results in 0.
addition        b           miss
a               a+b         miss
miss            miss        miss

subtraction     b           miss
a               a-b         miss
miss            miss        miss

multiplication  b           0       miss
a               a*b         0       miss
0               0           0       0
miss            miss        0       miss

division        b           0       miss
a               a/b         miss    miss
0               0           miss    miss
miss            miss        miss    miss

maximum         b           miss
a               max(a,b)    a
miss            b           miss

minimum         b           miss
a               min(a,b)    a
miss            b           miss

sum             b           miss
a               a+b         a
miss            b           miss
The handling of missing values by the operations "minimum" and "maximum" may be surprising, but the
definition given here is more consistent with what is expected in practice. Mathematical functions (e.g. log,
sqrt, etc.) return the missing value if an argument is the missing value or is out of range.
All statistical functions ignore missing values, treating them as not belonging to the sample, with the
side-effect of a reduced sample size.
An artificial distinction is made between the notions mean and average. The mean is regarded as a
statistical function, whereas the average is found simply by adding the sample members and dividing the
result by the sample size. For example, the mean of 1, 2, miss and 3 is (1 + 2 + 3)/3 = 2, whereas the
average is (1 + 2 + miss + 3)/4 = miss/4 = miss. If there are no missing values in the sample, the average
and mean are identical.
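The missing-value rules in the tables above, and the mean/average distinction, can be illustrated with a small Python sketch. This is a simplified model for illustration only; the function names are hypothetical and CDO itself implements this logic internally in C:

```python
MISS = -9.e33  # default missing value for GRIB/SERVICE/EXTRA/IEG files

def add(a, b):
    # "addition" table: a + miss = miss
    return MISS if MISS in (a, b) else a + b

def mul(a, b):
    # "multiplication" table: 0 * miss = 0, otherwise a * miss = miss
    if a == 0 or b == 0:
        return 0
    return MISS if MISS in (a, b) else a * b

def mean(sample):
    # statistical mean: missing values are ignored, reducing the sample size
    valid = [v for v in sample if v != MISS]
    return sum(valid) / len(valid) if valid else MISS

def average(sample):
    # average: plain sum over the full sample size, so one miss poisons the result
    return MISS if MISS in sample else sum(sample) / len(sample)

sample = [1, 2, MISS, 3]
print(mean(sample))     # 2.0
print(average(sample))  # -9e+33 (miss)
```

Note how a single missing value makes the average missing, while the mean is simply computed over a smaller sample.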
1.10. Percentile
There is no standard definition of percentile. All definitions yield similar results when the number of
values is very large. The following percentile methods are available in CDO:
Percentile method           Description
nrank Nearest Rank method [default in CDO]
nist The primary method recommended by NIST
rtype8 R’s type=8 method
inverted_cdf NumPy with percentile method=’inverted_cdf’ (R type=1)
averaged_inverted_cdf NumPy with percentile method=’averaged_inverted_cdf’ (R type=2)
closest_observation NumPy with percentile method=’closest_observation’ (R type=3)
interpolated_inverted_cdf NumPy with percentile method=’interpolated_inverted_cdf’ (R type=4)
hazen NumPy with percentile method=’hazen’ (R type=5)
weibull NumPy with percentile method=’weibull’ (R type=6)
linear NumPy with percentile method=’linear’ (R type=7) [default in NumPy and R]
median_unbiased NumPy with percentile method=’median_unbiased’ (R type=8)
normal_unbiased NumPy with percentile method=’normal_unbiased’ (R type=9)
lower NumPy with percentile method=’lower’
higher NumPy with percentile method=’higher’
midpoint NumPy with percentile method=’midpoint’
nearest NumPy with percentile method=’nearest’
The percentile method can be selected with the CDO option --percentile. The Nearest Rank method
is the default percentile method in CDO.
The different percentile methods can lead to different results, especially for a small number of data
values. Consider the ordered list {15, 20, 35, 40, 50, 55}, which contains six data values. The 30th,
40th, 50th, 75th and 100th percentiles of this list differ depending on the chosen method.
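Two of these methods are easy to compute by hand. Below is a minimal pure-Python sketch of the Nearest Rank method and the ’linear’ (R type 7) method; the helper names are hypothetical and serve only to illustrate how the definitions diverge:

```python
import math

def pctl_nrank(sorted_vals, p):
    # Nearest Rank: the value at rank ceil(p/100 * n); CDO's default method
    n = len(sorted_vals)
    rank = max(1, math.ceil(p / 100.0 * n))
    return sorted_vals[rank - 1]

def pctl_linear(sorted_vals, p):
    # 'linear' (R type 7, NumPy default): interpolate at position (n-1)*p/100
    n = len(sorted_vals)
    pos = (n - 1) * p / 100.0
    lo = math.floor(pos)
    frac = pos - lo
    if lo + 1 >= n:
        return sorted_vals[-1]
    return sorted_vals[lo] + frac * (sorted_vals[lo + 1] - sorted_vals[lo])

vals = [15, 20, 35, 40, 50, 55]
print(pctl_nrank(vals, 50))   # 35
print(pctl_linear(vals, 50))  # 37.5
```

For the 50th percentile the Nearest Rank method returns 35, while the linear method interpolates to 37.5, showing how much the methods can differ for a sample of only six values.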
The amount of data in a time series can be very large. All data values need to be held in memory to
calculate the percentile. To limit the amount of memory required, the percentile over timesteps uses
a histogram algorithm. The default number of histogram bins is 101, which means the histogram
algorithm is used when the dataset has more than 101 time steps. The default can be overridden by
setting the environment variable CDO_PCTL_NBINS to a different value. The histogram algorithm is
implemented only for the Nearest Rank method.
1.11. Regions
The CDO operators maskregion and selregion can be used to mask and select regions. For this purpose,
the region needs to be defined by the user. CDO offers two ways to define regions.
One possibility is to define the regions in an ASCII file. Each region is defined by a convex polygon.
Each line of the polygon contains the longitude and latitude coordinates of a point. A description file for
regions can contain several polygons; these must be separated by a line containing the character &.
Here is a simple example of a polygon for a box with longitudes from 120W to 90E and latitudes from 20N
to 20S:
120 20
120 -20
270 -20
270 20
With the second option, predefined regions can be used via country codes. A country is specified with
dcw:<CountryCode>. Country codes can be combined with the plus sign.
2. Reference manual
This section gives a description of all operators. Related operators are grouped into modules. For easier
description, all single input files are named infile or infile1, infile2, etc., and an arbitrary number of
input files is named infiles. All output files are named outfile or outfile1, outfile2, etc. Furthermore,
the following notation is introduced:
i(t) Timestep t of infile
i(t, x) Element number x of the field at timestep t of infile
o(t) Timestep t of outfile
o(t, x) Element number x of the field at timestep t of outfile
2.1. Information
This section contains modules to print information about datasets. All operators print their results to
standard output.
Here is a short overview of all operators in this section:
Synopsis
Description
This module writes information about the structure and contents for each field of all input files to
standard output. A field is a horizontal layer of a data variable. All input files need to have the same
structure with the same variables on different timesteps. The information displayed depends on the
chosen operator.
Operators
Example
To print information and simple statistics for each field of a dataset use:
cdo infon infile
Synopsis
< operator > infiles
Description
This module writes information about the structure of infiles to standard output. infiles is an
arbitrary number of input files. All input files need to have the same structure with the same variables
on different timesteps. The information displayed depends on the chosen operator.
Operators
sinfo Short information listed by parameter identifier
Prints short information of a dataset. The information is divided into 4 sections. Section
1 prints one line per parameter with the following information:
• institute and source
• time c=constant v=varying
• type of statistical processing
• number of levels and z-axis number
• horizontal grid size and number
• data type
• parameter identifier
Sections 2 and 3 give a short overview of all grid and vertical coordinates, and the last
section contains short information on the time coordinate.
sinfon Short information listed by parameter name
The same as operator sinfo but using the name instead of the identifier to label the
parameter.
Example
To print short information of a dataset use:
cdo sinfon infile
Synopsis
< operator > infiles
Description
This module writes information about the structure of infiles to standard output. infiles is an
arbitrary number of input files. All input files need to have the same structure with the same variables
on different timesteps. The information displayed depends on the chosen operator.
Operators
xsinfo Extra short information listed by parameter name
Prints short information of a dataset. The information is divided into 4 sections. Section
1 prints one line per parameter with the following information:
• institute and source
• time c=constant v=varying
• type of statistical processing
• number of levels and z-axis number
• horizontal grid size and number
• data type
• memory type (float or double)
• parameter name
Sections 2 to 4 give a short overview of all grid, vertical and time coordinates.
xsinfop Extra short information listed by parameter identifier
The same as operator xsinfo but using the identifier instead of the name to label the
parameter.
Example
To print extra short information of a dataset use:
cdo xsinfo infile
Synopsis
Description
Compares the contents of two datasets field by field. The input datasets need to have the same
structure, and their fields need to have the same dimensions. Try the parameter names if the number
of variables differs. The exit status is 0 if the inputs are the same and 1 if they differ.
Operators
Parameter
maxcount INTEGER Stop after maxcount different fields
abslim FLOAT Limit of the maximum absolute difference (default: 0)
rellim FLOAT Limit of the maximum relative difference (default: 1)
names STRING Use the variable names of only one input file (left/right) or
the intersection of both (intersect).
Example
To print the difference for each field of two datasets use:
cdo diffn infile1 infile2
This is an example result of two datasets with one 2D parameter over 12 timesteps:
Synopsis
Description
This module prints the number of variables, levels or times of the input dataset.
Operators
Example
To print the number of parameters (variables) in a dataset use:
cdo npar infile
Synopsis
Description
This module prints the format, variables, levels or times of the input dataset.
Operators
Example
To print the code number of all variables in a dataset use:
cdo showcode infile
Synopsis
showattribute[,attributes] infile
Description
This operator prints the attributes of the data variables of a dataset.
Each attribute has the following structure:
[var_nm@][att_nm]
The value of var_nm is the name of the variable containing the attribute (named att_nm) that
you want to print. Use wildcards to print the attribute att_nm of more than one variable. A value
of var_nm of ’*’ will print the attribute att_nm of all data variables. If var_nm is missing then
att_nm refers to a global attribute.
The value of att_nm is the name of the attribute you want to print. Use wildcards to print more
than one attribute. A value of att_nm of ’*’ will print all attributes.
Parameter
attributes STRING Comma-separated list of attributes.
Synopsis
Description
This module provides operators to print meta information about a dataset. The printed meta-data
depends on the chosen operator.
Operators
Example
Assume all variables of the dataset are on a Gaussian N16 grid. To print the grid description of this
dataset use:
cdo griddes infile
Result:
gridtype : gaussian
gridsize : 2048
xname : lon
xlongname : longitude
xunits : degrees_east
yname : lat
ylongname : latitude
yunits : degrees_north
xsize : 64
ysize : 32
xfirst : 0
xinc : 5.625
yvals : 85.76058 80.26877 74.74454 69.21297 63.67863 58.1429 52.6065
        47.06964 41.53246 35.99507 30.4575 24.91992 19.38223 13.84448
        8.306702 2.768903 -2.768903 -8.306702 -13.84448 -19.38223
        -24.91992 -30.4575 -35.99507 -41.53246 -47.06964 -52.6065
        -58.1429 -63.67863 -69.21297 -74.74454 -80.26877 -85.76058
2.2. File operations
Synopsis
apply,operators infiles
Description
The apply utility runs the named operators on each input file. The input files must be enclosed in
square brackets. This utility can only be used with operators that process a series of input files,
i.e. all operators with more than one input file (infiles). Here is an incomplete list of these operators:
copy, cat, merge, mergetime, select, ENSSTAT. The parameter operators is a blank-separated list of
CDO operators. Use quotation marks if more than one operator is needed. Each operator may have
only one input and one output stream.
Parameter
operators STRING Blank-separated list of CDO operators.
Example
Suppose we have multiple input files with multiple variables on different time steps. The input files
contain the variables U and V, among others. We are only interested in the absolute windspeed on
all time steps. Here is the standard CDO solution for this task:
cdo expr,wind="sqrt(u*u+v*v)" -mergetime infile1 infile2 infile3 outfile
This first joins all the time steps together and then calculates the wind speed. If there are many
variables in the input files, this procedure is inefficient. In this case it is better to calculate the
wind speed first:
cdo mergetime -expr,wind="sqrt(u*u+v*v)" infile1 \
-expr,wind="sqrt(u*u+v*v)" infile2 \
-expr,wind="sqrt(u*u+v*v)" infile3 outfile
However, this can quickly become very confusing with more than 3 input files. The apply operator
solves this problem:
cdo mergetime -apply,-expr,wind="sqrt(u*u+v*v)" [ infile1 infile2 infile3 ] outfile
Another example is the calculation of the mean value over several input files with ensmean. The input
files contain several variables, but we are only interested in the variable named XXX:
cdo ensmean -apply,-selname,XXX [ infile1 infile2 infile3 ] outfile
Synopsis
Description
This module contains operators to copy, clone or concatenate datasets. infiles is an arbitrary
number of input files. All input files need to have the same structure with the same variables on
different timesteps.
Operators
Example
To change the format of a dataset to NetCDF use:
cdo -f nc copy infile outfile.nc
Add the option ’-r’ to create a relative time axis, as is required for proper recognition by GrADS or
Ferret:
cdo -r -f nc copy infile outfile.nc
If the output dataset already exists and you wish to extend it with more timesteps use:
cdo cat infile1 infile2 infile3 outfile
Synopsis
Description
This operator copies the input dataset to outfile1 and outfile2. The first output stream,
outfile1, can be further processed with other CDO operators. The second output, outfile2, is
written to disk and can be used to store intermediate results in a file.
Parameter
outfile2 STRING Destination filename for the copy of the input file
Example
To compute the daily and monthly average of a dataset use:
cdo monavg -tee,outfile_dayavg -dayavg infile outfile_monavg
Synopsis
Description
Packing reduces the data volume by reducing the precision of the stored numbers. It is implemented
using the NetCDF attributes add_offset and scale_factor. The operator pack calculates the
attributes add_offset and scale_factor for all variables. The default data type for all variables is
automatically changed to 16-bit integer. Use the CDO option -b to change the data type to a different
integer precision, if needed. Missing values are automatically transformed to the current data type.
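The arithmetic behind pack can be sketched as follows. This is a simplified model of linear packing into 16-bit integers via add_offset and scale_factor; the function names are hypothetical and the exact constants CDO computes may differ:

```python
def pack(values, nbits=16):
    # map [vmin, vmax] linearly onto the signed integer range
    vmin, vmax = min(values), max(values)
    imax = (1 << (nbits - 1)) - 1          # 32767 for 16 bit
    scale_factor = (vmax - vmin) / (2 * imax) or 1.0
    add_offset = (vmax + vmin) / 2.0
    packed = [round((v - add_offset) / scale_factor) for v in values]
    return packed, scale_factor, add_offset

def unpack(packed, scale_factor, add_offset):
    # inverse transform, as applied by NetCDF readers and the unpack operator
    return [p * scale_factor + add_offset for p in packed]

vals = [0.0, 5.0, 10.0]
packed, sf, off = pack(vals)
print(packed)                   # integers spanning the 16-bit range
print(unpack(packed, sf, off))  # values close to the originals
```

The round trip is lossy in general: the reconstructed values differ from the originals by up to half of scale_factor.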
Synopsis
Description
Packing reduces the data volume by reducing the precision of the stored numbers. It is implemented
using the NetCDF attributes add_offset and scale_factor. The operator unpack unpacks all packed
variables. The default data type for all variables is automatically changed to 32-bit floats. Use the
CDO option -b F64 to change the data type to 64-bit floats, if needed.
Synopsis
Description
This operator calculates, for each field, the number of mantissa bits needed to reach a certain
information level in the data. The data are then rounded to this number of significant bits (numbits),
which allows them to be compressed to a higher level.
The default value of the information level is 0.9999 and can be adjusted with the parameter inflevel.
That means 99.99% of the information in the mantissa bits is preserved.
Alternatively, the number of significant bits can be set for all variables with the numbits parameter.
Furthermore, numbits can be assigned for each variable via the filename parameter. In this case,
numbits is still calculated for all variables if they are not present in the file.
The analysis of the bit information is based on the Julia library BitInformation.jl. The procedure to
derive the number of significant mantissa bits was adapted from the Python library xbitinfo. Quantization
to the number of mantissa bits is done with IEEE rounding, using code from NetCDF 4.9.0.
Currently only 32-bit float data is rounded. Data with missing values are not yet supported for the
calculation of significant bits.
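The core rounding step can be sketched in Python on the raw float32 bit pattern. This is a simplified version using round-half-up (the actual implementation in NetCDF 4.9.0 uses IEEE round-to-nearest-even), and the function name is hypothetical:

```python
import struct

def bitround(x, keepbits):
    # round a 32-bit float to 'keepbits' explicit mantissa bits
    # (simplified round-half-up on the bit pattern; illustrative only)
    ui = struct.unpack('<I', struct.pack('<f', x))[0]
    drop = 23 - keepbits            # float32 has 23 explicit mantissa bits
    if drop <= 0:
        return x
    # add half of the dropped range, then clear the dropped bits
    ui = (ui + (1 << (drop - 1))) & ~((1 << drop) - 1)
    return struct.unpack('<f', struct.pack('<I', ui))[0]

print(bitround(0.2, 2))   # 0.1875: only 2 mantissa bits survive
```

With 2 mantissa bits, 0.2 is rounded to 0.1875 (binary 1.10 x 2^-3); the cleared trailing bits are what makes the field compress so well afterwards.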
Parameter
inflevel FLOAT Information level (0 - 1) [default: 0.9999]
addbits INTEGER Add bits to the number of significant bits [default: 0]
minbits INTEGER Minimum value of the number of bits [default: 1]
maxbits INTEGER Maximum value of the number of bits [default: 23]
numsteps INTEGER Set to 1 to run the calculation only in the first time step
numbits INTEGER Set number of significant bits
printbits BOOL Print max. numbits per variable of 1st timestep to stdout [format: name=numbits]
filename STRING Read number of significant bits per variable from file [format: name=numbits]
Example
Apply bit rounding to all 32-bit float fields, preserving 99.9% of the information, followed by
compression and storage as NetCDF4:
cdo -f nc4 -z zip bitrounding,inflevel=0.999 infile outfile
Add the option ’-v’ to view the number of mantissa bits used for each field:
cdo -v -f nc4 -z zip bitrounding,inflevel=0.999 infile outfile
Synopsis
Description
This operator replaces variables in infile1 with variables from infile2 and writes the result to outfile.
Both input datasets need to have the same number of timesteps. Each variable name may occur
only once!
Example
Assume the first input dataset infile1 has three variables with the names geosp, t and tslm1 and the
second input dataset infile2 has only the variable tslm1. To replace the variable tslm1 in infile1
by tslm1 from infile2 use:
cdo replace infile1 infile2 outfile
Synopsis
Description
This operator duplicates the contents of infile and writes the result to outfile. The optional
parameter sets the number of duplicates, the default is 2.
Parameter
ndup INTEGER Number of duplicates, default is 2.
Synopsis
Description
Merges grid points of all variables from infile2 into infile1 and writes the result to outfile. Only
the non-missing values of infile2 are used. The horizontal grid of infile2 should be smaller
than or equal to the grid of infile1, and the resolution must be the same. Only rectilinear grids are
supported. Both input files need to have the same variables and the same number of timesteps.
Synopsis
Description
This module reads datasets from several input files, merges them and writes the resulting dataset to
outfile.
Operators
Environment
SKIP_SAME_TIME If set to 1, skips all consecutive timesteps with a double entry of the same
timestamp.
Note
Operators of this module need to open all input files simultaneously. The maximum number of open
files depends on the operating system!
Example
Assume three datasets with the same number of timesteps and different variables in each dataset. To
merge these datasets to a new dataset use:
cdo merge infile1 infile2 infile3 outfile
Assume you split a 6 hourly dataset with splithour. This produces four datasets, one for each hour.
The following command merges them together:
cdo mergetime infile1 infile2 infile3 infile4 outfile
Synopsis
Description
This module splits infile into pieces. The output files will be named <obase><xxx><suffix>,
where suffix is the filename extension derived from the file format. xxx and the contents of the output
files depend on the chosen operator. params is a comma-separated list of processing parameters.
Operators
Parameter
swap STRING Swap the position of obase and xxx in the output filename
uuid=<attname> STRING Add a UUID as global attribute <attname> to each output file
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Note
Operators of this module need to open all output files simultaneously. The maximum number of open
files depends on the operating system!
Example
Assume an input GRIB1 dataset with three variables, e.g. the code numbers 129, 130 and 139. To split
this dataset into three pieces, one for each code number, use:
cdo splitcode infile code
Synopsis
Description
This module splits infile into pieces along the time axis. The output files will be named
<obase><xxx><suffix>, where suffix is the filename extension derived from the file format. xxx and
the contents of the output files depend on the chosen operator.
Operators
Parameter
format STRING C-style format for strftime() (e.g. %B for the full month name)
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Note
Operators of this module need to open all output files simultaneously. The maximum number of open
files depends on the operating system!
Example
Assume the input GRIB1 dataset has timesteps from January to December. To split each month with
all variables into one separate file use:
cdo splitmon infile mon
Synopsis
Description
This operator splits infile into pieces, one for each adjacent sequence t_1, ..., t_n of timesteps of
the same selected time range. The output files will be named <obase><nnnnnn><suffix> where
nnnnnn is the sequence number and suffix is the filename extension derived from the file format.
Parameter
nsets INTEGER Number of input timesteps for each output file
noffset INTEGER Number of input timesteps skipped before the first timestep range (optional)
nskip INTEGER Number of input timesteps skipped between timestep ranges (optional)
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Synopsis
Description
This operator splits infile into pieces, one for each different date. The output files will be named
<obase><YYYY-MM-DD><suffix> where YYYY-MM-DD is the date and suffix is the filename exten-
sion derived from the file format.
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Synopsis
Description
This operator distributes a dataset into smaller pieces. Each output file contains a different region
of the horizontal source grid. 2D Lon/Lat grids can be split into nx*ny pieces, where a target grid
region contains a structured longitude/latitude box of the source grid. Data on an unstructured grid
is split into nx pieces. The output files will be named <obase><xxx><suffix> where suffix is the
filename extension derived from the file format. xxx will have five digits with the number of the target
region.
Parameter
nx INTEGER Number of regions in x direction, or number of pieces for unstructured grids
ny INTEGER Number of regions in y direction [default: 1]
Note
This operator needs to open all output files simultaneously. The maximum number of open files
depends on the operating system!
Example
Distribute data on a 2D Lon/Lat grid into 6 smaller files, each output file receives one half of x and
a third of y of the source grid:
cdo distgrid,2,3 infile.nc obase
(Figure: the field of the input file on the left is distributed into the six regional output files on the right.)
Synopsis
Description
This operator collects the data of the input files into one output file. All input files need to have the
same variables and the same number of timesteps, each on a different horizontal grid region. If the
source regions are on a structured lon/lat grid, all regions together must form a new structured lon/lat
grid box. Data on an unstructured grid is concatenated in the order of the input files. The parameter
nx needs to be specified only for curvilinear grids.
Parameter
nx INTEGER Number of regions in x direction [default: number of input files]
names STRING Comma-separated list of variable names [default: all variables]
Note
This operator needs to open all input files simultaneously. The maximum number of open files depends
on the operating system!
Example
Collect the horizontal grid of 6 input files. Each input file contains a lon/lat region of the target grid:
cdo collgrid infile[1-6] outfile
(Figure: the fields of the six regional input files on the left are collected into the single output file on the right.)
2.3. Selection
This section contains modules to select time steps, fields or a part of a field from a dataset.
Here is a short overview of all operators in this section:
Synopsis
Description
This module selects some fields from infiles and writes them to outfile. infiles is an arbitrary
number of input files. All input files need to have the same structure with the same variables on
different timesteps. The fields selected depend on the chosen parameters. The parameter is a comma-
separated list of "key=value" pairs. A range of integer values can be specified by first/last[/inc].
Wildcards are supported for string values.
Operators
Parameter
name STRING Comma-separated list of variable names.
param STRING Comma-separated list of parameter identifiers.
code INTEGER Comma-separated list or first/last[/inc] range of code numbers.
level FLOAT Comma-separated list of vertical levels.
levrange FLOAT First and last value of the level range.
levidx INTEGER Comma-separated list or first/last[/inc] range of index of levels.
zaxisname STRING Comma-separated list of zaxis names.
zaxisnum INTEGER Comma-separated list or first/last[/inc] range of zaxis numbers.
ltype INTEGER Comma-separated list or first/last[/inc] range of GRIB level types.
gridname STRING Comma-separated list of grid names.
gridnum INTEGER Comma-separated list or first/last[/inc] range of grid numbers.
steptype STRING Comma-separated list of timestep types (constant, avg, accum,
min, max, range, diff, sum)
date STRING Comma-separated list of dates (format YYYY-MM-DDThh:mm:ss).
startdate STRING Start date (format YYYY-MM-DDThh:mm:ss).
enddate STRING End date (format YYYY-MM-DDThh:mm:ss).
minute INTEGER Comma-separated list or first/last[/inc] range of minutes.
hour INTEGER Comma-separated list or first/last[/inc] range of hours.
day INTEGER Comma-separated list or first/last[/inc] range of days.
month INTEGER Comma-separated list or first/last[/inc] range of months.
season STRING Comma-separated list of seasons (substring of DJFMAMJJA-
SOND or ANN).
Example
Assume you have 3 input files. Each input file contains the same variables for a different time period.
To select the variables T, U and V on the levels 200, 500 and 850 from all 3 input files, use:
cdo select,name=T,U,V,level=200,500,850 infile1 infile2 infile3 outfile
Synopsis
< operator >,selection-specification infile outfile
Description
This module selects multiple fields from infile and writes them to outfile. selection-specification
is a filename or an in-place string with the selection specification. Each selection specification has the
following compact notation format:
<type>(parameters; leveltype(s); levels)
The following descriptive notation can also be used for a selection specification from a file:
SELECT/DELETE, PARAMETER=parameters, LEVTYPE=leveltype(s), LEVEL=levels
Examples:
SELECT, PARAMETER=1, LEVTYPE=103, LEVEL=0
SELECT, PARAMETER=33/34, LEVTYPE=105, LEVEL=10
SELECT, PARAMETER=11/17, LEVTYPE=105, LEVEL=2
SELECT, PARAMETER=71/73/74/75/61/62/65/117/67/122, LEVTYPE=105, LEVEL=0
DELETE, PARAMETER=128, LEVTYPE=109, LEVEL=*
The following will convert pressure from Pa to hPa and temperature from Kelvin to Celsius:
SELECT, PARAMETER=1, LEVTYPE=103, LEVEL=0, SCALE=0.01
SELECT, PARAMETER=11, LEVTYPE=105, LEVEL=2, OFFSET=273.15
If SCALE and/or OFFSET are defined, then the data values are scaled as SCALE*(VALUE-OFFSET).
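A quick check of the scaling rule with the two conversions above (the helper name is hypothetical, for illustration only):

```python
def apply_selmulti_scaling(value, scale=1.0, offset=0.0):
    # selection-specification rule: result = SCALE * (VALUE - OFFSET)
    return scale * (value - offset)

print(apply_selmulti_scaling(101325.0, scale=0.01))   # Pa -> hPa (1013.25)
print(apply_selmulti_scaling(293.15, offset=273.15))  # K -> degC (~20.0)
```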
Operators
selmulti Select multiple fields
Example
Change ECMWF GRIB code of surface pressure to Hirlam notation:
cdo changemulti,’{(134;1;*|1;105;*)}’ infile outfile
Synopsis
Description
This module selects some fields from infile and writes them to outfile. The fields selected depend
on the chosen operator and the parameters. A range of integer values can be specified by
first/last[/inc].
Operators
Parameter
parameter STRING Comma-separated list of parameter identifiers.
codes INTEGER Comma-separated list or first/last[/inc] range of code numbers.
names STRING Comma-separated list of variable names.
stdnames STRING Comma-separated list of standard names.
levels FLOAT Comma-separated list of vertical levels.
levidx INTEGER Comma-separated list or first/last[/inc] range of index of levels.
ltypes INTEGER Comma-separated list or first/last[/inc] range of GRIB level types.
grids STRING Comma-separated list of grid names or numbers.
zaxes STRING Comma-separated list of z-axis types or numbers.
zaxisnames STRING Comma-separated list of z-axis names.
tabnums INTEGER Comma-separated list or range of parameter table numbers.
Example
Assume an input dataset has three variables with the code numbers 129, 130 and 139. To select the
variables with the code number 129 and 139 use:
cdo selcode,129,139 infile outfile
You can also select the code number 129 and 139 by deleting the code number 130 with:
cdo delcode,130 infile outfile
Synopsis
Description
This module selects user-specified timesteps from infile and writes them to outfile. The timesteps
selected depend on the chosen operator and the parameters. A range of integer values can be specified
by first/last[/inc].
Operators
Parameter
timesteps INTEGER Comma-separated list or first/last[/inc] range of timesteps. Negative val-
ues select timesteps from the end (NetCDF only).
times STRING Comma-separated list of times (format hh:mm:ss).
hours INTEGER Comma-separated list or first/last[/inc] range of hours.
days INTEGER Comma-separated list or first/last[/inc] range of days.
months INTEGER Comma-separated list or first/last[/inc] range of months.
years INTEGER Comma-separated list or first/last[/inc] range of years.
seasons STRING Comma-separated list of seasons (substring of DJFMAMJJASOND or
ANN).
startdate STRING Start date (format YYYY-MM-DDThh:mm:ss).
enddate STRING End date (format YYYY-MM-DDThh:mm:ss) [default: startdate].
nts1 INTEGER Number of timesteps before the selected month [default: 0].
nts2 INTEGER Number of timesteps after the selected month [default: nts1].
Synopsis
Description
Selects grid cells inside a lon/lat or index box.
Operators
Parameter
lon1 FLOAT Western longitude in degrees
lon2 FLOAT Eastern longitude in degrees
lat1 FLOAT Southern or northern latitude in degrees
lat2 FLOAT Northern or southern latitude in degrees
idx1 INTEGER Index of first longitude (1 - nlon)
idx2 INTEGER Index of last longitude (1 - nlon)
idy1 INTEGER Index of first latitude (1 - nlat)
idy2 INTEGER Index of last latitude (1 - nlat)
Example
To select the region with the longitudes from 30W to 60E and latitudes from 30N to 80N from all
input fields use:
cdo sellonlatbox,-30,60,30,80 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be selected with selindexbox
by:
cdo selindexbox,60,11,3,11 infile outfile
Synopsis
Description
Selects all grid cells with the center point inside user defined regions or a circle. The resulting grid is
unstructured.
Operators
Parameter
regions STRING Comma-separated list of ASCII formatted files with different regions
lon FLOAT Longitude of the center of the circle in degrees, default lon=0.0
lat FLOAT Latitude of the center of the circle in degrees, default lat=0.0
radius STRING Radius of the circle, default radius=1deg (units: deg, rad, km, m)
Example
To select all grid cells of a country use the country code with data from the Digital Chart of the
World. Here is an example for Spain with the country code ES:
cdo selregion,dcw:ES infile outfile
Synopsis
Description
The operator selects grid cells of all fields from infile. The user must specify the index of each grid
cell. The resulting grid in outfile is unstructured.
Operators
Parameter
indices INTEGER Comma-separated list or first/last[/inc] range of indices
Synopsis
Description
This is a special operator for resampling the horizontal grid. No interpolation takes place. A resample
factor of 2 means every second grid point is removed. Only rectilinear and curvilinear source grids are
supported by this operator.
Parameter
factor INTEGER Resample factor, typically 2, which will halve the resolution
Synopsis
Description
Selects field elements from infile2 by a yearly time index from infile1. The yearly indices in
infile1 should be the result of corresponding yearminidx and yearmaxidx operations, respectively.
Synopsis
Description
This module computes a surface from all 3D variables. The result is a horizontal 2D field.
Operators
Parameter
isovalue FLOAT Isosurface value
2.4. Conditional selection
ifthen If then
ifnotthen If not then
Synopsis
Description
This module selects field elements from infile2 with respect to infile1 and writes them to outfile.
The fields in infile1 are handled as a mask: a value not equal to zero is treated as "true", zero is
treated as "false". The number of fields in infile1 has to be the same as in infile2, the same as in
one timestep of infile2, or exactly one. The fields in outfile inherit the meta data from infile2.
Operators
ifthen If then
o(t,x) = \begin{cases} i_2(t,x) & \text{if } i_1(t,x) \neq 0 \wedge i_1(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = 0 \vee i_1(t,x) = \text{miss} \end{cases}
ifnotthen If not then
o(t,x) = \begin{cases} i_2(t,x) & \text{if } i_1(t,x) = 0 \wedge i_1(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) \neq 0 \vee i_1(t,x) = \text{miss} \end{cases}
Example
To select all field elements of infile2 if the corresponding field element of infile1 is greater than
0 use:
cdo ifthen infile1 infile2 outfile
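The mask logic of ifthen and ifnotthen can be sketched element-wise in a few lines of Python. This is an illustration of the formulas above only, not CDO code; the MISS sentinel is a hypothetical missing value.

```python
MISS = -9e33  # hypothetical missing-value sentinel, not CDO's default

def ifthen(mask, data, miss=MISS):
    """Keep data where the mask is nonzero and not missing; else missing."""
    return [d if m != 0 and m != miss else miss
            for m, d in zip(mask, data)]

def ifnotthen(mask, data, miss=MISS):
    """Keep data where the mask is zero (and not missing); else missing."""
    return [d if m == 0 and m != miss else miss
            for m, d in zip(mask, data)]

print(ifthen([1, 0, MISS], [10.0, 20.0, 30.0]))  # [10.0, MISS, MISS]
```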
Synopsis
Description
This operator selects field elements from infile2 or infile3 with respect to infile1 and writes
them to outfile. The fields in infile1 are handled as a mask: a value not equal to zero is treated
as "true", zero is treated as "false". The number of fields in infile1 has to be the same as in
infile2, the same as in one timestep of infile2, or exactly one. infile2 and infile3 need to have
the same number of fields. The fields in outfile inherit the meta data from infile2.
o(t,x) = \begin{cases} i_2(t,x) & \text{if } i_1(t,x) \neq 0 \wedge i_1(t,x) \neq \text{miss} \\ i_3(t,x) & \text{if } i_1(t,x) = 0 \wedge i_1(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \end{cases}
Example
To select all field elements of infile2 if the corresponding field element of infile1 is greater than
0 and from infile3 otherwise use:
cdo ifthenelse infile1 infile2 infile3 outfile
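The three-way choice of ifthenelse can likewise be sketched in Python (illustrative only; the MISS sentinel is an assumption, not CDO's internal value):

```python
MISS = -9e33  # hypothetical missing-value sentinel

def ifthenelse(mask, then_data, else_data, miss=MISS):
    """Pick from then_data where the mask is nonzero, from else_data where
    it is zero, and propagate missing mask values to the output."""
    out = []
    for m, a, b in zip(mask, then_data, else_data):
        if m == miss:
            out.append(miss)
        elif m != 0:
            out.append(a)
        else:
            out.append(b)
    return out

print(ifthenelse([1, 0, MISS], [1.0, 2.0, 3.0], [9.0, 8.0, 7.0]))
# [1.0, 8.0, MISS]
```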
Synopsis
Description
This module creates fields with a constant value or missing value. The fields in infile are handled
as a mask. A value not equal to zero is treated as "true", zero is treated as "false".
Operators
Parameter
c FLOAT Constant
Example
To create fields with the constant value 7 if the corresponding field element of infile is greater than
0 use:
cdo ifthenc,7 infile outfile
Synopsis
Description
This module holds an operator for data reduction based on a user defined mask. The output grid
is unstructured and includes coordinate bounds. Bounds can be avoided by using the additional
'nobounds' keyword. With 'nocoords' given, coordinates are suppressed completely.
Parameter
mask STRING file which holds the mask field
limitCoordsOutput STRING optional parameter to limit coordinates output: 'nobounds' disables coordinate bounds, 'nocoords' avoids all coordinate information
Example
To limit data fields to land values, a mask has to be created first with
cdo -gtc,0 -topo,ni96 lsm_gme96.grb
Here a GME grid is used. Say temp_gme96.grb contains a global temperature field. The following
command limits the global grid to land points.
cdo -f nc reduce,lsm_gme96.grb temp_gme96.grb tempOnLand_gme96.nc
Note that the output file type is NetCDF, because unstructured grids cannot be stored in GRIB format.
2.5. Comparison
This section contains modules to compare datasets. The resulting field is a mask containing 1 if the
comparison is true and 0 if not.
Here is a short overview of all operators in this section:
eq Equal
ne Not equal
le Less equal
lt Less than
ge Greater equal
gt Greater than
Synopsis
Description
This module compares two datasets field by field. The resulting field is a mask containing 1 if the
comparison is true and 0 if not. The number of fields in infile1 should be the same as in infile2.
One of the input files can contain only one timestep or one field. The fields in outfile inherit the
meta data from infile1 or infile2. The type of comparison depends on the chosen operator.
Operators
eq Equal
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) = i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) \neq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
ne Not equal
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) \neq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) = i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
le Less equal
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) \leq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) > i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
lt Less than
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) < i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) \geq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
ge Greater equal
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) \geq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) < i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
gt Greater than
o(t,x) = \begin{cases} 1 & \text{if } i_1(t,x) > i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ 0 & \text{if } i_1(t,x) \leq i_2(t,x) \wedge i_1(t,x), i_2(t,x) \neq \text{miss} \\ \text{miss} & \text{if } i_1(t,x) = \text{miss} \vee i_2(t,x) = \text{miss} \end{cases}
Example
To create a mask containing 1 if the elements of two fields are the same and 0 if the elements are
different use:
cdo eq infile1 infile2 outfile
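The comparison semantics above can be sketched in Python; the other operators of this module differ only in the comparison used. This is an illustration of the formulas, not CDO code, and the MISS sentinel is a hypothetical missing value.

```python
MISS = -9e33  # hypothetical missing-value sentinel

def eq(a, b, miss=MISS):
    """Field-wise equality mask: 1 if equal, 0 if not, and missing if
    either input element is missing."""
    out = []
    for x, y in zip(a, b):
        if x == miss or y == miss:
            out.append(miss)
        else:
            out.append(1 if x == y else 0)
    return out

print(eq([1.0, 2.0, MISS], [1.0, 3.0, 5.0]))  # [1, 0, MISS]
```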
Synopsis
Description
This module compares all fields of a dataset with a constant. The resulting field is a mask containing
1 if the comparison is true and 0 if not. The type of comparison depends on the chosen operator.
Operators
Parameter
c FLOAT Constant
Example
To create a mask containing 1 if the field element is greater than 273.15 and 0 if not use:
cdo gtc,273.15 infile outfile
Synopsis
Description
This module performs comparisons of a time series and one timestep with the same month of year.
For each field in infile1 the corresponding field of the timestep in infile2 with the same month of
year is used. The resulting field is a mask containing 1 if the comparison is true and 0 if not. The
type of comparison depends on the chosen operator. The input files need to have the same structure
with the same variables. Usually infile2 is generated by an operator of the module YMONSTAT.
Operators
2.6. Modification
This section contains modules to modify the metadata, fields or part of a field in a dataset.
Here is a short overview of all operators in this section:
shiftx Shift x
shifty Shift y
Synopsis
Description
This operator sets attributes of a dataset and writes the result to outfile. The new attributes are
only available in outfile if the file format supports attributes.
Each attribute has the following structure:
[var_nm@]att_nm[:s|d|i]=[att_val|{[var_nm@]att_nm}]
The value of var_nm is the name of the variable containing the attribute (named att_nm) that
you want to set. Use wildcards to set the attribute att_nm to more than one variable. A value
of var_nm of ’*’ will set the attribute att_nm to all data variables. If var_nm is missing then
att_nm refers to a global attribute.
The value of att_nm is the name of the attribute you want to set. For each attribute a string
(att_nm:s), a double (att_nm:d) or an integer (att_nm:i) type can be defined. By default the native
type is set.
The value of att_val is the contents of the attribute att_nm. att_val may be a single value or
one-dimensional array of elements. The type and the number of elements of an attribute will be
detected automatically from the contents of the values. An already existing attribute att_nm will
be overwritten or it will be removed if att_val is omitted. Alternatively, the values of an existing
attribute can be copied. This attribute must then be enclosed in curly brackets.
The attribute name FILE has a special meaning: if it is the first attribute, then all attributes are
read from a file specified in the value of att_val.
Parameter
attributes STRING Comma-separated list of attributes.
Note
Attributes are evaluated by CDO when opening infile. Therefore the result of this operator is not
available to other operators when it is used in a chain of operators.
Example
To set the units of the variable pressure to pascal use:
cdo setattribute,pressure@units=pascal infile outfile
netcdf outfile {
dimensions: ...
variables: ...
// global attributes:
        :my_att = "my contents" ;
}
Synopsis
Description
This module transforms data and metadata of infile via a parameter table and writes the result
to outfile. A parameter table is an ASCII formatted file with a set of parameter entries for each
variable. Each new set has to start with "&parameter" and end with "/".
The following parameter table entries are supported:
Unsupported parameter table entries are stored as variable attributes. The search key for the variable
depends on the operator. Use setpartabn to search variables by the name. This is typically used for
NetCDF datasets. The operator setpartabp searches variables by the parameter ID.
Operators
Parameter
table STRING Parameter table file or name
convert STRING Converts the units if necessary
Example
Here is an example of a parameter table for one variable:
prompt> cat mypartab
&parameter
name = t
out_name = ta
standard_name = air_temperature
units = "K"
missing_value = 1.0e+20
valid_min = 157.1
valid_max = 336.3
/
This command renames the variable t to ta. The standard name of this variable is set to air_temperature
and the unit is set to [K] (converts the unit if necessary). The missing value will be set to 1.0e+20.
In addition it will be checked whether the values of the variable are in the range of 157.1 to 336.3.
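The "&parameter ... /" block format of the example above can be parsed with a few lines of Python. This is a minimal sketch of how such an entry could be read, not CDO's implementation.

```python
def parse_partab(text):
    """Parse '&parameter ... /' blocks into a list of dicts."""
    entries, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "&parameter":
            current = {}              # start a new parameter entry
        elif line == "/" and current is not None:
            entries.append(current)   # end of the entry
            current = None
        elif current is not None and "=" in line:
            key, _, value = line.partition("=")
            current[key.strip()] = value.strip().strip('"')
    return entries

table = """&parameter
  name = t
  out_name = ta
  units = "K"
/"""
print(parse_partab(table))
# [{'name': 't', 'out_name': 'ta', 'units': 'K'}]
```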
Synopsis
Description
This module sets some field information. Depending on the chosen operator the parameter table,
code number, parameter identifier, variable name or level is set.
Operators
Parameter
table STRING Parameter table file or name
code INTEGER Code number
param STRING Parameter identifier (GRIB1: code[.tabnum]; GRIB2: num[.cat[.dis]])
name STRING Variable name
level FLOAT New level
ltype INTEGER GRIB level type
maxsteps INTEGER Maximum number of timesteps
Synopsis
Description
This module sets the time axis or part of the time axis. Which part of the time axis is
overwritten or created depends on the chosen operator. The number of time steps does not change.
Operators
Parameter
day INTEGER Value of the new day
month INTEGER Value of the new month
year INTEGER Value of the new year
units STRING Base units of the time axis (seconds, minutes, hours, days, months, years)
date STRING Date (format: YYYY-MM-DD)
time STRING Time (format: hh:mm:ss)
inc STRING Optional increment (seconds, minutes, hours, days, months, years) [default: 1hour]
frequency STRING Frequency of the time series (hour, day, month, year)
calendar STRING Calendar (standard, proleptic_gregorian, 360_day, 365_day, 366_day)
shiftValue STRING Shift value (e.g. -3hour)
Example
To set the time axis to 1987-01-16 12:00:00 with an increment of one month for each timestep use:
cdo settaxis,1987-01-16,12:00:00,1mon infile outfile
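What a monthly increment produces for this example can be sketched with the standard library. This is a pure illustration of the resulting time axis; CDO handles calendars internally and this helper is an assumption, not a CDO function.

```python
from datetime import datetime

def monthly_axis(start, steps):
    """Advance the start date by whole months, keeping day and time."""
    dates = []
    y, m = start.year, start.month
    for _ in range(steps):
        dates.append(start.replace(year=y, month=m))
        m += 1
        if m > 12:          # roll over into the next year
            m, y = 1, y + 1
    return dates

axis = monthly_axis(datetime(1987, 1, 16, 12, 0, 0), 3)
print([d.isoformat() for d in axis])
# ['1987-01-16T12:00:00', '1987-02-16T12:00:00', '1987-03-16T12:00:00']
```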
Synopsis
chcode,oldcode,newcode[,...] infile outfile
chparam,oldparam,newparam,... infile outfile
chname,oldname,newname,... infile outfile
chunit,oldunit,newunit,... infile outfile
chlevel,oldlev,newlev,... infile outfile
chlevelc,code,oldlev,newlev infile outfile
chlevelv,name,oldlev,newlev infile outfile
Description
This module reads fields from infile, changes some header values and writes the results to outfile.
The kind of changes depends on the chosen operator.
Operators
chcode Change code number
Changes some user given code numbers to new user given values.
chparam Change parameter identifier
Changes some user given parameter identifiers to new user given values.
chname Change variable or coordinate name
Changes some user given variable or coordinate names to new user given names.
chunit Change variable unit
Changes some user given variable units to new user given units.
chlevel Change level
Changes some user given levels to new user given values.
chlevelc Change level of one code
Changes one level of a user given code number.
chlevelv Change level of one variable
Changes one level of a user given variable name.
Parameter
code INTEGER Code number
oldcode,newcode,... INTEGER Pairs of old and new code numbers
oldparam,newparam,... STRING Pairs of old and new parameter identifiers
name STRING Variable name
oldname,newname,... STRING Pairs of old and new variable names
oldlev FLOAT Old level
newlev FLOAT New level
oldlev,newlev,... FLOAT Pairs of old and new levels
Example
To change the code number 98 to 179 and 99 to 211 use:
cdo chcode,98,179,99,211 infile outfile
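The old,new pair lists of chcode behave like a mapping applied to each field's code number, which can be sketched as follows (an illustration of the parameter semantics, not CDO internals):

```python
def chcode(codes, *pairs):
    """Replace code numbers according to old,new,old,new,... pairs;
    codes without a matching pair are left unchanged."""
    mapping = dict(zip(pairs[0::2], pairs[1::2]))
    return [mapping.get(c, c) for c in codes]

# corresponds to: cdo chcode,98,179,99,211 infile outfile
print(chcode([98, 99, 130], 98, 179, 99, 211))  # [179, 211, 130]
```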
Synopsis
setgrid,grid infile outfile
setgridtype,gridtype infile outfile
setgridarea,gridarea infile outfile
setgridmask,gridmask infile outfile
Description
This module modifies the metadata of the horizontal grid. Depending on the chosen operator a new
grid description is set, the coordinates are converted or the grid cell area is added.
Operators
setgrid Set grid
Sets a new grid description. The input fields need to have the same grid size as the
size of the target grid description.
setgridtype Set grid type
Sets the grid type of all input fields. The following grid types are available:
curvilinear Converts a regular grid to a curvilinear grid
unstructured Converts a regular or curvilinear grid to an unstructured grid
dereference Dereference a reference to a grid
regular Linear interpolation of a reduced Gaussian grid to a regular Gaussian grid
regularnn Nearest neighbor interpolation of a reduced Gaussian grid to a regular Gaussian grid
lonlat Converts a regular lonlat grid stored as a curvilinear grid back to a lonlat grid
projection Removes the geographical coordinates if projection parameters are available
setgridarea Set grid cell area
Sets the grid cell area. The parameter gridarea is the path to a data file, the first
field is used as grid cell area. The input fields need to have the same grid size as the
grid cell area. The grid cell area is used to compute the weights of each grid cell if
needed by an operator, e.g. for fldmean.
setgridmask Set grid mask
Sets the grid mask. The parameter gridmask is the path to a data file, the first field
is used as the grid mask. The input fields need to have the same grid size as the
grid mask. The grid mask is used as the target grid mask for remapping, e.g. for
remapbil.
Parameter
grid STRING Grid description file or name
gridtype STRING Grid type (curvilinear, unstructured, regular, lonlat, projection or dereference)
gridarea STRING Data file, the first field is used as grid cell area
gridmask STRING Data file, the first field is used as grid mask
Example
Assume a dataset has fields on a grid with 8192 elements but with a missing or wrong grid description.
To set the grid description of all input fields to a Gaussian N32 grid (8192 gridpoints) use:
cdo setgrid,n32 infile outfile
Synopsis
Description
This module modifies the metadata of the vertical grid.
Operators
Parameter
zaxis STRING Z-axis description file or name of the target z-axis
zbot FLOAT Specifying the bottom of the vertical column. Must have the same units as
z-axis.
ztop FLOAT Specifying the top of the vertical column. Must have the same units as z-axis.
Synopsis
Description
This operator inverts the latitudes of all fields on a rectilinear grid.
Example
To invert the latitudes of a 2D field from N->S to S->N use:
cdo invertlat infile outfile
Synopsis
Description
This operator inverts the levels of all 3D variables.
Synopsis
Description
This module contains operators to shift all fields in x or y direction. All fields need to have the same
horizontal rectilinear or curvilinear grid.
Operators
shiftx Shift x
Shifts all fields in x direction.
shifty Shift y
Shifts all fields in y direction.
Parameter
nshift INTEGER Number of grid cells to shift (default: 1)
cyclic STRING If set, cells are filled cyclically (default: missing value)
coord STRING If set, coordinates are also shifted
Example
To shift all input fields in the x direction by +1 cells and fill the new cells with missing value, use:
cdo shiftx infile outfile
To shift all input fields in the x direction by +1 cells and fill the new cells cyclic, use:
cdo shiftx,1,cyclic infile outfile
Synopsis
Description
Masks different regions of the input fields. The grid cells inside a region are untouched, the cells
outside are set to missing value. Only grid cells whose center lies inside a region are considered.
All input fields must have the same horizontal grid.
Regions can be defined by the user via an ASCII file. Each region consists of the geographic coordinates
of a convex polygon. Each line of a polygon description file contains the longitude and latitude of one
point. Each polygon description file can contain one or more polygons separated by a line with the
character &.
Predefined regions of countries can be specified via the country codes. A country is specified with
dcw:<CountryCode>. Country codes can be combined with the plus sign.
Parameter
regions STRING Comma-separated list of ASCII formatted files with different regions
Example
To mask the region with the longitudes from 120E to 90W and latitudes from 20N to 20S on all input
fields use:
cdo maskregion,myregion infile outfile
For this example the description file of the region myregion should contain one polygon with the
following four coordinates:
120 20
120 −20
270 −20
270 20
To mask the region of a country use the country code with data from the Digital Chart of the World.
Here is an example for Spain with the country code ES:
cdo maskregion,dcw:ES infile outfile
Synopsis
Description
Masks grid cells inside a lon/lat or index box. The elements inside the box are untouched, the
elements outside are set to missing value. All input fields need to have the same horizontal grid. Use
sellonlatbox or selindexbox if only the data inside the box are needed.
Operators
Parameter
lon1 FLOAT Western longitude
lon2 FLOAT Eastern longitude
lat1 FLOAT Southern or northern latitude
lat2 FLOAT Northern or southern latitude
idx1 INTEGER Index of first longitude
idx2 INTEGER Index of last longitude
idy1 INTEGER Index of first latitude
idy2 INTEGER Index of last latitude
Example
To mask the region with the longitudes from 120E to 90W and latitudes from 20N to 20S on all input
fields use:
cdo masklonlatbox,120,-90,20,-20 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be masked with maskindexbox
by:
cdo maskindexbox,23,48,13,20 infile outfile
Synopsis
Description
Sets the values inside a lon/lat or index box to a constant value. The elements outside the box
are untouched, the elements inside are set to the given constant. All input fields need to have the
same horizontal grid.
Operators
Parameter
c FLOAT Constant
lon1 FLOAT Western longitude
lon2 FLOAT Eastern longitude
lat1 FLOAT Southern or northern latitude
lat2 FLOAT Northern or southern latitude
idx1 INTEGER Index of first longitude
idx2 INTEGER Index of last longitude
idy1 INTEGER Index of first latitude
idy2 INTEGER Index of last latitude
Example
To set all values in the region with the longitudes from 120E to 90W and latitudes from 20N to 20S
to the constant value -1.23 use:
cdo setclonlatbox,-1.23,120,-90,20,-20 infile outfile
If the input dataset has fields on a Gaussian N16 grid, the same box can be set with setcindexbox by:
cdo setcindexbox,-1.23,23,48,13,20 infile outfile
Synopsis
Description
Enlarges all fields of infile to a user given horizontal grid. Normally only the last field element is
used for the enlargement. If, however, the input and output grids are regular lon/lat grids, a zonal or
meridional enlargement is possible. Zonal enlargement takes place if the xsize of the input field is 1
and the ysizes of both grids are the same. For meridional enlargement the ysize has to be 1 and the
xsizes of both grids have to be the same.
Parameter
grid STRING Target grid description file or name
Example
Assume you want to add two datasets. The first dataset is a field on a global grid (n field elements)
and the second dataset is a global mean (1 field element). Before you can add these two datasets, the
second dataset has to be enlarged to the grid size of the first dataset:
cdo enlarge,infile1 infile2 tmpfile
cdo add infile1 tmpfile outfile
Synopsis
setmissval,newmiss infile outfile
setctomiss,c infile outfile
setmisstoc,c infile outfile
setrtomiss,rmin,rmax infile outfile
setvrange,rmin,rmax infile outfile
setmisstonn infile outfile
setmisstodis[,neighbors] infile outfile
Description
This module sets part of a field to missing value or missing values to a constant value. Which part of
the field is set depends on the chosen operator.
Operators
setmissval Set a new missing value
o(t,x) = \begin{cases} \text{newmiss} & \text{if } i(t,x) = \text{miss} \\ i(t,x) & \text{if } i(t,x) \neq \text{miss} \end{cases}
setctomiss Set constant to missing value
o(t,x) = \begin{cases} \text{miss} & \text{if } i(t,x) = c \\ i(t,x) & \text{if } i(t,x) \neq c \end{cases}
setmisstoc Set missing value to constant
o(t,x) = \begin{cases} c & \text{if } i(t,x) = \text{miss} \\ i(t,x) & \text{if } i(t,x) \neq \text{miss} \end{cases}
setrtomiss Set range to missing value
o(t,x) = \begin{cases} \text{miss} & \text{if } i(t,x) \geq rmin \wedge i(t,x) \leq rmax \\ i(t,x) & \text{if } i(t,x) < rmin \vee i(t,x) > rmax \end{cases}
setvrange Set valid range
o(t,x) = \begin{cases} \text{miss} & \text{if } i(t,x) < rmin \vee i(t,x) > rmax \\ i(t,x) & \text{if } i(t,x) \geq rmin \wedge i(t,x) \leq rmax \end{cases}
setmisstonn Set missing value to nearest neighbor
Set all missing values to the nearest non missing value.
o(t,x) = \begin{cases} i(t,y) & \text{if } i(t,x) = \text{miss} \wedge i(t,y) \neq \text{miss} \\ i(t,x) & \text{if } i(t,x) \neq \text{miss} \end{cases}
setmisstodis Set missing value to distance-weighted average
Set all missing values to the distance-weighted average of the nearest non missing
values. The default number of nearest neighbors is 4.
Parameter
neighbors INTEGER Number of nearest neighbors
newmiss FLOAT New missing value
c FLOAT Constant
rmin FLOAT Lower bound
rmax FLOAT Upper bound
Example
setrtomiss
Assume an input dataset has one field with temperatures in the range from 246 to 304 Kelvin. To set
all values below 273.15 Kelvin to missing value use:
cdo setrtomiss,0,273.15 infile outfile
setmisstonn
Set all missing values to the nearest non missing value:
cdo setmisstonn infile outfile
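The range operators of this module can be sketched element-wise in Python. This illustrates the setrtomiss and setvrange formulas only; the MISS sentinel is a hypothetical missing value, not CDO's default.

```python
MISS = -9e33  # hypothetical missing-value sentinel

def setrtomiss(data, rmin, rmax, miss=MISS):
    """Set every value inside [rmin, rmax] to missing."""
    return [miss if rmin <= v <= rmax else v for v in data]

def setvrange(data, rmin, rmax, miss=MISS):
    """Keep only values inside [rmin, rmax]; everything else is missing."""
    return [v if rmin <= v <= rmax else miss for v in data]

# mirrors: cdo setrtomiss,0,273.15 infile outfile
print(setrtomiss([246.0, 280.0, 304.0], 0.0, 273.15))
# [MISS, 280.0, 304.0]
```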
[Figure: on the left, the input data with missing values in grey; on the right, the result with the
missing values filled.]
Synopsis
Description
This operator fills in vertical missing values. The method parameter can be used to select the filling
method. The default method=nearest fills missing values with the nearest neighbor value. Other
options are forward and backward to fill missing values by forward or backward propagation of
values. Use the limit parameter to set the maximum number of consecutive missing values to fill and
max_gaps to set the maximum number of gaps to fill.
Parameter
method STRING Fill method [nearest|linear|forward|backward] (default: nearest)
limit INTEGER The maximum number of consecutive missing values to fill (default: all)
max_gaps INTEGER The maximum number of gaps to fill (default: all)
Synopsis
Description
This operator fills in temporally missing values. The method parameter can be used to select the
filling method. The default method=nearest fills missing values with the nearest neighbor value.
Other options are forward and backward to fill missing values by forward or backward propagation
of values. Use the limit parameter to set the maximum number of consecutive missing values to fill
and max_gaps to set the maximum number of gaps to fill.
Parameter
method STRING Fill method [nearest|linear|forward|backward] (default: nearest)
limit INTEGER The maximum number of consecutive missing values to fill (default: all)
max_gaps INTEGER The maximum number of gaps to fill (default: all)
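The forward-propagation filling described above can be sketched in Python. This is an illustrative one-dimensional sketch of method=forward with a limit, not CDO's implementation; the MISS sentinel is an assumption.

```python
MISS = -9e33  # hypothetical missing-value sentinel

def fill_forward(data, limit=None, miss=MISS):
    """Propagate the last non-missing value forward, filling at most
    `limit` consecutive missing values (None = no limit)."""
    out, last, run = [], miss, 0
    for v in data:
        if v != miss:
            out.append(v)
            last, run = v, 0
        else:
            run += 1
            if last != miss and (limit is None or run <= limit):
                out.append(last)   # fill from the previous valid value
            else:
                out.append(miss)   # limit exceeded or no valid value yet
    return out

print(fill_forward([1.0, MISS, MISS, 4.0], limit=1))
# [1.0, 1.0, MISS, 4.0]
```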
Synopsis
Description
This operator sets the value of the selected grid cells. The grid cells can be selected by a
comma-separated list of grid cell indices or a mask. The mask is read from a data file, which may
contain only one field. If no grid cells are selected, all values are set.
Parameter
value FLOAT Value of the grid cell
cell INTEGER Comma-separated list of grid cell indices
mask STRING Name of the data file which contains the mask
2.7. Arithmetic
Synopsis
Description
This module arithmetically processes every timestep of the input dataset. Each individual assignment
statement has to end with a semi-colon. The special key _ALL_ is used as a template: a statement
with a template is replaced for all variable names. Unlike regular variables, temporary variables are
never written to the output stream. To define a temporary variable, simply prefix the variable name
with an underscore (e.g. _varname) when the variable is declared.
The following operators are supported:
Coordinates:
Operators
Parameter
instr STRING Processing instructions (need to be ’quoted’ in most cases)
filename STRING File with processing instructions
Note
If the input stream contains duplicate entries of the same variable name then the last one is used.
Example
Assume an input dataset contains at least the variables 'aprl', 'aprc' and 'ts'. To create a new variable
'var1' with the sum of 'aprl' and 'aprc' and a variable 'var2' which converts the temperature 'ts' from
Kelvin to Celsius use:
cdo expr,’var1=aprl+aprc;var2=ts-273.15;’ infile outfile
The same example, but the instructions are read from a file:
cdo exprf,myexpr infile outfile
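What the two expr instructions compute can be mimicked per timestep in Python. This is a sketch of the assignment semantics only (one grid point for brevity), not the expr parser; the sample values are made up.

```python
# one grid point per variable, purely for illustration
fields = {"aprl": 1.5, "aprc": 0.5, "ts": 280.0}

out = {
    "var1": fields["aprl"] + fields["aprc"],  # var1 = aprl + aprc;
    "var2": fields["ts"] - 273.15,            # var2 = ts - 273.15;
}
print(out)
```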
Synopsis
Description
This module contains some standard mathematical functions. All trigonometric functions calculate
with radians.
Operators
Example
To calculate the square root for all field elements use:
cdo sqrt infile outfile
Synopsis
Description
This module performs simple arithmetic with all field elements of a dataset and a constant. The fields
in outfile inherit the meta data from infile.
Operators
Parameter
c FLOAT Constant
Example
To sum all input fields with the constant -273.15 use:
cdo addc,-273.15 infile outfile
Synopsis
Description
This module performs simple arithmetic of two datasets. The number of fields in infile1 should be
the same as in infile2. The fields in outfile inherit the meta data from infile1. All operators in
this module simply process one field after the other from the two input files. Neither the order of the
variables nor the date is checked. One of the input files can contain only one timestep or one variable.
Operators
Example
To sum all fields of the first input file with the corresponding fields of the second input file use:
cdo add infile1 infile2 outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same day, month
and year. For each field in infile1 the corresponding field of the timestep in infile2 with the same
day, month and year is used. The input files need to have the same structure with the same variables.
Usually infile2 is generated by an operator of the module DAYSTAT.
Operators
Example
To subtract a daily time average from a time series use:
cdo daysub infile -dayavg infile outfile
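The pairing rule of this module (match fields by day, month and year) can be sketched as follows. This is an illustration with scalar samples instead of 2D fields, not CDO code.

```python
from datetime import date

def daysub(series, daily_stat):
    """Subtract, from each (date, value) sample, the daily_stat value
    recorded for the same calendar day."""
    return [(d, v - daily_stat[d]) for d, v in series]

series = [(date(2000, 1, 1), 10.0), (date(2000, 1, 1), 14.0),
          (date(2000, 1, 2), 20.0)]
means = {date(2000, 1, 1): 12.0, date(2000, 1, 2): 20.0}
print(daysub(series, means))  # anomalies: -2.0, 2.0, 0.0
```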
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same month and
year. For each field in infile1 the corresponding field of the timestep in infile2 with the same
month and year is used. The input files need to have the same structure with the same variables.
Usually infile2 is generated by an operator of the module MONSTAT.
Operators
Example
To subtract a monthly time average from a time series use:
cdo monsub infile -monavg infile outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same year. For
each field in infile1 the corresponding field of the timestep in infile2 with the same year is used.
The header information in infile1 has to be the same as in infile2. Usually infile2 is generated
by an operator of the module YEARSTAT.
Operators
Example
To subtract a yearly time average from a time series use:
cdo yearsub infile -yearavg infile outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same hour and
day of year. For each field in infile1 the corresponding field of the timestep in infile2 with the
same hour and day of year is used. The input files need to have the same structure with the same
variables. Usually infile2 is generated by an operator of the module YHOURSTAT.
Operators
Example
To subtract a multi-year hourly time average from a time series use:
cdo yhoursub infile -yhouravg infile outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same day of year.
For each field in infile1 the corresponding field of the timestep in infile2 with the same day of year
is used. The input files need to have the same structure with the same variables. Usually infile2 is
generated by an operator of the module YDAYSTAT.
Operators
Example
To subtract a multi-year daily time average from a time series use:
cdo ydaysub infile -ydayavg infile outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same month of
year. For each field in infile1 the corresponding field of the timestep in infile2 with the same
month of year is used. The input files need to have the same structure with the same variables.
Usually infile2 is generated by an operator of the module YMONSTAT.
Operators
Example
To subtract a multi-year monthly time average from a time series use:
cdo ymonsub infile -ymonavg infile outfile
Synopsis
Description
This module performs simple arithmetic of a time series and one timestep with the same season. For
each field in infile1 the corresponding field of the timestep in infile2 with the same season is
used. The input files need to have the same structure with the same variables. Usually infile2 is
generated by an operator of the module YSEASSTAT.
Operators
Example
To subtract a multi-year seasonal time average from a time series use:
cdo yseassub infile -yseasavg infile outfile
Synopsis
Description
This module multiplies or divides each timestep of a dataset by the corresponding number of days per
month or days per year. The result of these functions depends on the calendar of the input data.
Operators
Synopsis
Description
This module multiplies or divides each field element with the cosine of the latitude.
Operators
Reference manual Statistical values
This section contains modules to compute statistical values of datasets. This program distinguishes between "mean" and "average" to denote two different treatments of missing values. When computing the mean, only the non-missing values are considered to belong to the sample, with the side effect of a possibly reduced sample size. Computing the average is simply adding the sample members and dividing the result by the sample size. For example, the mean of 1, 2, miss and 3 is (1+2+3)/3 = 2, whereas the average is (1+2+miss+3)/4 = miss/4 = miss. If there are no missing values in the sample, the average and the mean are identical.
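This distinction can be sketched in plain Python (illustration only, not CDO code; `None` stands in for a missing value):

```python
def stat_mean(values):
    """Mean: missing values are dropped from the sample."""
    present = [v for v in values if v is not None]
    return sum(present) / len(present) if present else None

def stat_avg(values):
    """Average: a missing value poisons the sum, so the result is missing."""
    if any(v is None for v in values):
        return None
    return sum(values) / len(values)

sample = [1, 2, None, 3]
print(stat_mean(sample))  # 2.0
print(stat_avg(sample))   # None
```

Without missing values both functions return the same result, matching the text above.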
CDO uses the verification time to identify the time range for temporal statistics. The time bounds are never used!
In the following, $\bar{x}$ denotes the mean and $s$ the standard deviation of the sample $\{x_i,\ i=1,\ldots,n\}$:

sum: $\sum_{i=1}^{n} x_i$

mean resp. avg: $n^{-1} \sum_{i=1}^{n} x_i$

mean resp. avg weighted by $\{w_i,\ i=1,\ldots,n\}$: $\left(\sum_{j=1}^{n} w_j\right)^{-1} \sum_{i=1}^{n} w_i x_i$

Variance, var: $n^{-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$

var1: $(n-1)^{-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$

var weighted by $\{w_i,\ i=1,\ldots,n\}$: $\left(\sum_{j=1}^{n} w_j\right)^{-1} \sum_{i=1}^{n} w_i \left(x_i - \left(\sum_{j=1}^{n} w_j\right)^{-1} \sum_{j=1}^{n} w_j x_j\right)^2$

Standard deviation, std: $\sqrt{n^{-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}$

std1: $\sqrt{(n-1)^{-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}$

std weighted by $\{w_i,\ i=1,\ldots,n\}$: $\sqrt{\left(\sum_{j=1}^{n} w_j\right)^{-1} \sum_{i=1}^{n} w_i \left(x_i - \left(\sum_{j=1}^{n} w_j\right)^{-1} \sum_{j=1}^{n} w_j x_j\right)^2}$

median: $x_{(n+1)/2}$ if $n$ is odd, $\frac{1}{2}\left(x_{n/2} + x_{n/2+1}\right)$ if $n$ is even
Skewness, skew: $\dfrac{n^{-1}\sum_{i=1}^{n} (x_i - \bar{x})^3}{s^3}$

Kurtosis, kurt: $\dfrac{n^{-1}\sum_{i=1}^{n} (x_i - \bar{x})^4}{s^4}$

Cumulative Ranked Probability Score, crps: $\int_{-\infty}^{\infty} \left[H(x_1) - \mathrm{cdf}(\{x_2 \ldots x_n\})\big|_r\right]^2 \, dr$
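The weighted mean and weighted variance defined above can be sketched in plain Python (illustration only, not CDO's implementation):

```python
def weighted_mean(x, w):
    """Weighted mean: (sum of w_j)^-1 * sum of w_i * x_i."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_var(x, w):
    """Weighted variance about the weighted mean, normalized by sum of w_j."""
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sum(w)

x = [1.0, 2.0, 3.0]
w = [1.0, 1.0, 1.0]
# With equal weights these reduce to the unweighted mean and var (normalized by n).
print(weighted_mean(x, w))            # 2.0
print(round(weighted_var(x, w), 4))   # 0.6667
```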
Synopsis
Description
The timcumsum operator calculates the cumulative sum over all timesteps. Missing values are treated
as numeric zero when summing.
Synopsis
Description
This module computes periods over all timesteps in infile where a certain property is valid. The property can be chosen by creating a mask from the original data, which is the expected input format for operators of this module. Depending on the operator, either full information about each period or just its length and ending date is computed.
Operators
Example
For a given time series of daily temperatures, the periods of summer days can be calculated by masking the input field inline:
cdo consects -gtc,20.0 infile1 outfile
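The idea behind this module can be sketched in plain Python (illustration only): build the mask, then extract each maximal run of consecutive True values together with its start and length.

```python
def consecutive_periods(mask):
    """Return (start, length) for each maximal run of True values in mask."""
    periods, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            periods.append((start, i - start))
            start = None
    if start is not None:
        periods.append((start, len(mask) - start))
    return periods

temps = [18.0, 21.0, 22.0, 19.0, 25.0, 26.0, 27.0, 20.0]
summer_days = [t > 20.0 for t in temps]   # the -gtc,20.0 mask
print(consecutive_periods(summer_days))   # [(1, 2), (4, 3)]
```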
Synopsis
Description
This module computes statistical values over all variables for each timestep. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation is written to
outfile. All input variables need to have the same gridsize and the same number of levels.
Operators
Synopsis
<operator> infiles outfile
enspctl,p infiles outfile
Description
This module computes statistical values over an ensemble of input files. Depending on the chosen
operator, the minimum, maximum, range, sum, average, standard deviation, variance, skewness,
kurtosis, median or a certain percentile over all input files is written to outfile. All input files need
to have the same structure with the same variables. The date information of a timestep in outfile
is the date of the first input file.
Operators
ensmin Ensemble minimum
o(t, x) = min{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensmax Ensemble maximum
o(t, x) = max{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensrange Ensemble range
o(t, x) = range{i1 (t, x), i2 (t, x), · · · , in (t, x)}
enssum Ensemble sum
o(t, x) = sum{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensmean Ensemble mean
o(t, x) = mean{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensavg Ensemble average
o(t, x) = avg{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensstd Ensemble standard deviation
Normalize by n.
o(t, x) = std{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensstd1 Ensemble standard deviation (n-1)
Normalize by (n-1).
o(t, x) = std1{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensvar Ensemble variance
Normalize by n.
o(t, x) = var{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensvar1 Ensemble variance (n-1)
Normalize by (n-1).
o(t, x) = var1{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensskew Ensemble skewness
o(t, x) = skew{i1 (t, x), i2 (t, x), · · · , in (t, x)}
enskurt Ensemble kurtosis
o(t, x) = kurt{i1 (t, x), i2 (t, x), · · · , in (t, x)}
ensmedian Ensemble median
o(t, x) = median{i1 (t, x), i2 (t, x), · · · , in (t, x)}
enspctl Ensemble percentiles
o(t, x) = pth percentile{i1 (t, x), i2 (t, x), · · · , in (t, x)}
Parameter
p FLOAT Percentile number in 0, ..., 100
Note
Operators of this module need to open all input files simultaneously. The maximum number of open
files depends on the operating system!
Example
To compute the ensemble mean over 6 input files use:
cdo ensmean infile1 infile2 infile3 infile4 infile5 infile6 outfile
Synopsis
Description
This module computes statistical values over the ensemble of ensfiles, using obsfile as a reference. Depending on the operator, a ranked histogram or a ROC curve over all ensembles ensfiles with reference to obsfile is written to outfile. The date and grid information of a timestep in outfile is the date of the first input file. Thus all input files are required to have the same structure in terms of the gridsize, variable definitions and number of timesteps.
All operators in this module use obsfile as the reference (for instance an observation), whereas ensfiles are understood as an ensemble consisting of n members, where n is the number of ensfiles.
The operators ensrkhistspace and ensrkhisttime compute ranked histograms. The vertical axis is used as the histogram axis, which prohibits the use of files containing more than one level. The histogram axis has nensfiles+1 bins, with level 0 containing for each grid point the number of observations smaller than all ensemble members and level nensfiles+1 indicating the number of observations larger than all ensemble members.
ensrkhistspace computes a ranked histogram at each timestep, reducing each horizontal grid to a 1x1 grid and keeping the time axis as in obsfile. In contrast, ensrkhisttime computes a histogram at each grid point, keeping the horizontal grid for each variable and reducing the time axis. The time information is that of the last timestep in obsfile.
Operators
Example
To compute a rank histogram over 5 input files ensfile1-ensfile5 given an observation in obsfile
use:
cdo ensrkhisttime obsfile ensfile1 ensfile2 ensfile3 ensfile4 ensfile5 outfile
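The binning rule described above can be sketched in plain Python (illustration only; `obs_values` and the hypothetical member series stand in for single-level, single-point time series). For each observation, its bin is the number of ensemble members smaller than it, giving n+1 bins for n members:

```python
def rank_histogram(obs_values, ensembles):
    """ensembles: list of n member time series; returns counts in n+1 bins.
    Bin 0 counts observations smaller than all members, bin n those larger."""
    n = len(ensembles)
    counts = [0] * (n + 1)
    for t, obs in enumerate(obs_values):
        rank = sum(1 for member in ensembles if member[t] < obs)
        counts[rank] += 1
    return counts

obs = [0.5, 2.5, 1.5, 3.5]
members = [[1.0] * 4, [2.0] * 4, [3.0] * 4]
print(rank_histogram(obs, members))  # [1, 1, 1, 1]
```

A flat histogram, as here, indicates an ensemble whose members are statistically indistinguishable from the observation.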
Synopsis
Description
This module computes ensemble validation scores and their decomposition, such as the Brier score and the cumulative ranked probability score (CRPS). The first file is used as a reference; it can be a climatology, observation or reanalysis against which the skill of the ensembles given in infiles is measured. Depending on the operator, a number of output files is generated, each containing the skill score and its decomposition corresponding to the operator. The output is averaged over horizontal fields using appropriate weights for each level and timestep in rfile.
All input files need to have the same structure with the same variables. The date information of a timestep in outfile is the date of the first input file. The output files are named <outfilebase>.<type>.<filesuffix>, where <type> depends on the operator and <filesuffix> is determined from the output file type. There are three output files for operator enscrps and four output files for operator ensbrs.
The CRPS and its decomposition into reliability and the potential CRPS are calculated by an appropriate averaging over the field members (note that the CRPS does *not* average linearly). In the three output files <type> has the following meaning: crps for the CRPS, reli for the reliability and crpspot for the potential CRPS. The relation CRPS = CRPSpot + RELI holds.
The Brier score of the ensemble given by infiles with respect to the reference given in rfile and the threshold x is calculated. In the four output files <type> has the following meaning: brs for the Brier score wrt threshold x; brsreli for the Brier score reliability wrt threshold x; brsreso for the Brier score resolution wrt threshold x; brsunct for the Brier score uncertainty wrt threshold x. In analogy to the CRPS the following relation holds: BRS(x) = RELI(x) − RESO(x) + UNCT(x).
The implementation of the decomposition of the CRPS and Brier Score follows Hans Hersbach (2000):
Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems, in:
Weather and Forecasting (15) pp. 559-570.
The CRPS code decomposition has been verified against the CRAN ensemble-validation package from R. Differences occur when the grid cell area is not uniform, as the implementation in R does not account for that.
Operators
Example
To compute the field averaged Brier score at x=5 over an ensemble with 5 members ensfile1-5
w.r.t. the reference rfile and write the results to files obase.brs.<suff>, obase.brsreli<suff>,
obase.brsreso<suff>, obase.brsunct<suff> where <suff> is determined from the output file
type, use
cdo ensbrs,5 rfile ensfile1 ensfile2 ensfile3 ensfile4 ensfile5 obase
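The ensemble CRPS from the statistics table (the integral of the squared difference between the ensemble CDF and the observation's step function) can be checked numerically against the well-known pairwise closed form. This sketch is an illustration only; it is not the Hersbach decomposition these operators implement:

```python
def crps_integral(members, obs, lo=-10.0, hi=10.0, steps=20000):
    """Numerically integrate [F_ens(r) - H(r - obs)]^2 dr over [lo, hi]."""
    n = len(members)
    dr = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        r = lo + (k + 0.5) * dr
        f = sum(1 for m in members if m <= r) / n   # empirical ensemble CDF
        h = 1.0 if r >= obs else 0.0                # Heaviside step at obs
        total += (f - h) ** 2 * dr
    return total

def crps_pairwise(members, obs):
    """Closed form: E|X - obs| - 0.5 * E|X - X'|."""
    n = len(members)
    t1 = sum(abs(m - obs) for m in members) / n
    t2 = sum(abs(a - b) for a in members for b in members) / (n * n)
    return t1 - 0.5 * t2

ens, obs = [1.0, 2.0, 3.0], 2.5
print(round(crps_pairwise(ens, obs), 4))  # 0.3889
```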
Synopsis
Description
This module computes statistical values of all input fields. A field is a horizontal layer of a data
variable. Depending on the chosen operator, the minimum, maximum, range, sum, integral, average,
standard deviation, variance, skewness, kurtosis, median or a certain percentile of the field is written
to outfile.
Operators
Parameter
weights BOOL weights=FALSE disables weighting by grid cell area [default: weights=TRUE]
p FLOAT Percentile number in 0, ..., 100
Example
To compute the field mean of all input fields use:
cdo fldmean infile outfile
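The effect of the weights parameter can be sketched with cos(latitude) weights on a regular lon/lat grid (illustration only; CDO derives its weights from the actual grid cell areas):

```python
import math

def fld_mean(field, lats):
    """field[j][i]: value at latitude j, longitude i.
    Each row is weighted by the cosine of its latitude."""
    wsum = vsum = 0.0
    for j, lat in enumerate(lats):
        w = math.cos(math.radians(lat))
        for v in field[j]:
            wsum += w
            vsum += w * v
    return vsum / wsum

lats = [-60.0, 0.0, 60.0]
field = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(round(fld_mean(field, lats), 6))  # 2.0
```

With weights disabled (weights=FALSE) the operator would reduce to the plain arithmetic mean over all grid points.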
Synopsis
Description
This module computes zonal statistical values of the input fields. Depending on the chosen operator,
the zonal minimum, maximum, range, sum, average, standard deviation, variance, skewness, kurtosis,
median or a certain percentile of the field is written to outfile. Operators of this module require
all variables on the same regular lon/lat grid. Only the zonal mean (zonmean) can be calculated for
data on an unstructured grid if the latitude bins are defined with the optional parameter zonaldes.
Operators
Parameter
p FLOAT Percentile number in 0, ..., 100
zonaldes STRING Description of the zonal latitude bins needed for data on an unstructured
grid. A predefined zonal description is zonal_<DY>. DY is the increment of the lati-
tudes in degrees.
Example
To compute the zonal mean of all input fields use:
cdo zonmean infile outfile
To compute the 50th zonal percentile (median) of all input fields use:
cdo zonpctl,50 infile outfile
Synopsis
<operator> infile outfile
merpctl,p infile outfile
Description
This module computes meridional statistical values of the input fields. Depending on the chosen
operator, the meridional minimum, maximum, range, sum, average, standard deviation, variance,
skewness, kurtosis, median or a certain percentile of the field is written to outfile. Operators of this
module require all variables on the same regular lon/lat grid.
Operators
mermin Meridional minimum
For every longitude the minimum over all latitudes is computed.
mermax Meridional maximum
For every longitude the maximum over all latitudes is computed.
merrange Meridional range
For every longitude the range over all latitudes is computed.
mersum Meridional sum
For every longitude the sum over all latitudes is computed.
mermean Meridional mean
For every longitude the area weighted mean over all latitudes is computed.
meravg Meridional average
For every longitude the area weighted average over all latitudes is computed.
merstd Meridional standard deviation
For every longitude the standard deviation over all latitudes is computed. Normalize
by n.
merstd1 Meridional standard deviation (n-1)
For every longitude the standard deviation over all latitudes is computed. Normalize
by (n-1).
mervar Meridional variance
For every longitude the variance over all latitudes is computed. Normalize by n.
mervar1 Meridional variance (n-1)
For every longitude the variance over all latitudes is computed. Normalize by (n-1).
merskew Meridional skewness
For every longitude the skewness over all latitudes is computed.
merkurt Meridional kurtosis
For every longitude the kurtosis over all latitudes is computed.
mermedian Meridional median
For every longitude the median over all latitudes is computed.
merpctl Meridional percentiles
For every longitude the pth percentile over all latitudes is computed.
Parameter
p FLOAT Percentile number in 0, ..., 100
Example
To compute the meridional mean of all input fields use:
cdo mermean infile outfile
To compute the 50th meridional percentile (median) of all input fields use:
cdo merpctl,50 infile outfile
Synopsis
<operator>,nx,ny infile outfile
Description
This module computes statistical values over surrounding grid boxes. Depending on the chosen
operator, the minimum, maximum, range, sum, average, standard deviation, variance, skewness,
kurtosis or median of the neighboring grid boxes is written to outfile. All gridbox operators only
work on quadrilateral curvilinear grids.
Operators
gridboxmin Gridbox minimum
Minimum value of the selected grid boxes.
gridboxmax Gridbox maximum
Maximum value of the selected grid boxes.
gridboxrange Gridbox range
Range (max-min value) of the selected grid boxes.
gridboxsum Gridbox sum
Sum of the selected grid boxes.
gridboxmean Gridbox mean
Mean of the selected grid boxes.
gridboxavg Gridbox average
Average of the selected grid boxes.
gridboxstd Gridbox standard deviation
Standard deviation of the selected grid boxes. Normalize by n.
gridboxstd1 Gridbox standard deviation (n-1)
Standard deviation of the selected grid boxes. Normalize by (n-1).
gridboxvar Gridbox variance
Variance of the selected grid boxes. Normalize by n.
gridboxvar1 Gridbox variance (n-1)
Variance of the selected grid boxes. Normalize by (n-1).
gridboxskew Gridbox skewness
Skewness of the selected grid boxes.
gridboxkurt Gridbox kurtosis
Kurtosis of the selected grid boxes.
gridboxmedian Gridbox median
Median of the selected grid boxes.
Parameter
nx INTEGER Number of grid boxes in x direction
ny INTEGER Number of grid boxes in y direction
Example
To compute the mean over 10x10 grid boxes of the input field use:
cdo gridboxmean,10,10 infile outfile
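The gridbox reduction can be sketched in plain Python as a block mean over non-overlapping nx × ny windows of a 2D field (illustration only):

```python
def gridbox_mean(field, nx, ny):
    """Mean over non-overlapping ny x nx blocks of a 2D list (rows=y, cols=x)."""
    out = []
    for j in range(0, len(field), ny):
        row = []
        for i in range(0, len(field[0]), nx):
            block = [field[jj][ii]
                     for jj in range(j, min(j + ny, len(field)))
                     for ii in range(i, min(i + nx, len(field[0])))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

field = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
print(gridbox_mean(field, 2, 2))  # [[3.5, 5.5]]
```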
Synopsis
Description
This module maps source points to target cells by calculating a statistical value from the source
points. Each target cell contains the statistical value from all source points within that target cell.
If there are no source points within a target cell, it gets a missing value. The target grid must be
regular lon/lat or Gaussian. Depending on the chosen operator the minimum, maximum, range, sum,
average, variance, standard deviation, skewness, kurtosis or median of source points is computed.
Operators
Parameter
grid STRING Target grid description file or name
Example
To compute the mean over source points within the target cells, use:
cdo remapmean,<targetgrid> infile outfile
If some of the target cells contain missing values, use the operator setmisstonn to fill these missing values with the nearest neighbor cell:
cdo setmisstonn -remapmean,<targetgrid> infile outfile
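The mapping of source points to target cells can be sketched in plain Python (illustration only; half-open lon/lat bins stand in for the target grid cells, and empty cells get None as the missing value):

```python
def remap_mean(points, lon_edges, lat_edges):
    """points: (lon, lat, value) tuples. Returns per-cell mean of the source
    points falling into each cell, or None where a cell receives no points."""
    ny, nx = len(lat_edges) - 1, len(lon_edges) - 1
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]
    for lon, lat, val in points:
        for i in range(nx):
            if lon_edges[i] <= lon < lon_edges[i + 1]:
                break
        else:
            continue  # point outside the target grid
        for j in range(ny):
            if lat_edges[j] <= lat < lat_edges[j + 1]:
                break
        else:
            continue
        sums[j][i] += val
        counts[j][i] += 1
    return [[sums[j][i] / counts[j][i] if counts[j][i] else None
             for i in range(nx)] for j in range(ny)]

pts = [(5.0, 5.0, 1.0), (5.0, 5.0, 3.0), (15.0, 5.0, 4.0)]
print(remap_mean(pts, [0, 10, 20], [0, 10]))  # [[2.0, 4.0]]
```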
Synopsis
Description
This module computes statistical values over all levels of the input variables. Depending on the chosen operator, the vertical minimum, maximum, range, sum, average, variance or standard deviation is written to outfile.
Operators
Parameter
weights BOOL weights=FALSE disables weighting by layer thickness [default: weights=TRUE]
Example
To compute the vertical sum of all input variables use:
cdo vertsum infile outfile
Synopsis
Description
This module computes statistical values for a selected number of timesteps. Depending on the chosen operator, the minimum, maximum, range, sum, average, variance or standard deviation of the selected timesteps is written to outfile. The time of outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Parameter
nsets INTEGER Number of input timesteps for each output timestep
noffset INTEGER Number of input timesteps skipped before the first timestep range (optional)
nskip INTEGER Number of input timesteps skipped between timestep ranges (optional)
Example
Assume an input dataset has monthly means over several years. To compute seasonal means from monthly means, the first two months have to be skipped:
cdo timselmean,3,2 infile outfile
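The nsets/noffset/nskip selection logic can be sketched in plain Python (illustration only):

```python
def timsel_mean(series, nsets, noffset=0, nskip=0):
    """Mean over successive groups of nsets timesteps: skip noffset timesteps
    first, then skip nskip timesteps between consecutive groups."""
    out, t = [], noffset
    while t + nsets <= len(series):
        group = series[t:t + nsets]
        out.append(sum(group) / nsets)
        t += nsets + nskip
    return out

# 12 monthly values; skip Jan+Feb, then 3-month (seasonal) means: MAM, JJA, SON.
monthly = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
print(timsel_mean(monthly, 3, noffset=2))  # [4.0, 7.0, 10.0]
```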
Synopsis
Description
This operator computes percentile values over a selected number of timesteps in infile1. The
algorithm uses histograms with minimum and maximum bounds given in infile2 and infile3,
respectively. The default number of histogram bins is 101. The default can be overridden by setting
the environment variable CDO_PCTL_NBINS to a different value. The files infile2 and infile3 should
be the result of corresponding timselmin and timselmax operations, respectively. The time of outfile
is determined by the time in the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1, ..., t_n of timesteps of the same selected time range it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
nsets INTEGER Number of input timesteps for each output timestep
noffset INTEGER Number of input timesteps skipped before the first timestep range (optional)
nskip INTEGER Number of input timesteps skipped between timestep ranges (optional)
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
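The histogram-based percentile algorithm these operators share can be sketched in plain Python (an illustration of the idea only, not CDO's exact implementation; all values are assumed to lie within [vmin, vmax]):

```python
def hist_percentile(values, vmin, vmax, p, nbins=101):
    """Approximate the p-th percentile from a histogram with nbins bins
    spanning [vmin, vmax] (the bounds a min/max pre-pass would supply)."""
    counts = [0] * nbins
    width = (vmax - vmin) / nbins
    for v in values:
        k = min(int((v - vmin) / width), nbins - 1) if width > 0 else 0
        counts[k] += 1
    target = p / 100.0 * len(values)
    cum = 0
    for k, c in enumerate(counts):
        cum += c
        if cum >= target:
            return vmin + (k + 1) * width  # upper edge of the bin reached
    return vmax

values = list(range(1, 102))  # 1..101; true median is 51
print(hist_percentile(values, 1, 101, 50))
```

The result is accurate to within one bin width, which is why a larger CDO_PCTL_NBINS yields more precise percentiles.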
Synopsis
Description
This module computes running statistical values over a selected number of timesteps. Depending on
the chosen operator the minimum, maximum, range, sum, average, variance or standard deviation
of a selected number of consecutive timesteps read from infile is written to outfile. The time of
outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Parameter
nts INTEGER Number of timesteps
Environment
CDO_TIMESTAT_DATE Sets the time stamp in outfile to the "first", "middle" or "last" contributing
timestep of infile.
Example
To compute the running mean over 9 timesteps use:
cdo runmean,9 infile outfile
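A running mean can be sketched in plain Python (illustration only):

```python
def run_mean(series, nts):
    """Running mean over nts consecutive timesteps.
    The output has len(series) - nts + 1 timesteps."""
    return [sum(series[t:t + nts]) / nts
            for t in range(len(series) - nts + 1)]

print(run_mean([1, 2, 3, 4, 5], 3))  # [2.0, 3.0, 4.0]
```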
Synopsis
Description
This module computes running percentiles over a selected number of timesteps in infile. The time
of outfile is determined by the time in the middle of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
o(t + (nts − 1)/2, x) = pth percentile{i(t, x), i(t + 1, x), ..., i(t + nts − 1, x)}
Parameter
p FLOAT Percentile number in 0, ..., 100
nts INTEGER Number of timesteps
Example
To compute the running 50th percentile (median) over 9 timesteps use:
cdo runpctl,50,9 infile outfile
Synopsis
Description
This module computes statistical values over all timesteps in infile. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of all timesteps
read from infile is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Example
To compute the mean over all input timesteps use:
cdo timmean infile outfile
Synopsis
Description
This operator computes percentiles over all timesteps in infile1. The algorithm uses histograms
with minimum and maximum bounds given in infile2 and infile3, respectively. The default
number of histogram bins is 101. The default can be overridden by defining the environment variable
CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding timmin and
timmax operations, respectively. The time of outfile is determined by the time in the middle of
all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the 90th percentile over all input timesteps use:
cdo timmin infile minfile
cdo timmax infile maxfile
cdo timpctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values over timesteps of the same hour. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of timesteps
of the same hour is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Example
To compute the hourly mean of a time series use:
cdo hourmean infile outfile
Synopsis
Description
This operator computes percentiles over all timesteps of the same hour in infile1. The algorithm uses
histograms with minimum and maximum bounds given in infile2 and infile3, respectively. The
default number of histogram bins is 101. The default can be overridden by defining the environment
variable CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding
hourmin and hourmax operations, respectively. The time of outfile is determined by the time
in the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1, ..., t_n of timesteps of the same hour it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the hourly 90th percentile of a time series use:
cdo hourmin infile minfile
cdo hourmax infile maxfile
cdo hourpctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values over timesteps of the same day. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of timesteps
of the same day is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Example
To compute the daily mean of a time series use:
cdo daymean infile outfile
Synopsis
Description
This operator computes percentiles over all timesteps of the same day in infile1. The algorithm uses
histograms with minimum and maximum bounds given in infile2 and infile3, respectively. The
default number of histogram bins is 101. The default can be overridden by defining the environment
variable CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding
daymin and daymax operations, respectively. The time of outfile is determined by the time in
the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1, ..., t_n of timesteps of the same day it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the daily 90th percentile of a time series use:
cdo daymin infile minfile
cdo daymax infile maxfile
cdo daypctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values over timesteps of the same month. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of timesteps of
the same month is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Example
To compute the monthly mean of a time series use:
cdo monmean infile outfile
Synopsis
Description
This operator computes percentiles over all timesteps of the same month in infile1. The algorithm
uses histograms with minimum and maximum bounds given in infile2 and infile3, respectively.
The default number of histogram bins is 101. The default can be overridden by defining the environ-
ment variable CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding
monmin and monmax operations, respectively. The time of outfile is determined by the time in
the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1, ..., t_n of timesteps of the same month it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the monthly 90th percentile of a time series use:
cdo monmin infile minfile
cdo monmax infile maxfile
cdo monpctl,90 infile minfile maxfile outfile
Synopsis
Description
This operator computes the yearly mean of a monthly time series. Each month is weighted with the
number of days per month. The time of outfile is determined by the time in the middle of all
contributing timesteps of infile.
For every adjacent sequence t_1, ..., t_n of timesteps of the same year it is:
o(t, x) = mean{i(t′ , x), t1 < t′ ≤ tn }
Environment
CDO_TIMESTAT_DATE Sets the date information in outfile to the "first", "middle" or "last" contribut-
ing timestep of infile.
Example
To compute the yearly mean of a monthly time series use:
cdo yearmonmean infile outfile
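The days-per-month weighting can be sketched in plain Python (illustration only; a non-leap-year calendar is assumed):

```python
def year_mon_mean(monthly_means, days_per_month):
    """Yearly mean of monthly means, each month weighted by its number of days."""
    total_days = sum(days_per_month)
    return sum(m * d for m, d in zip(monthly_means, days_per_month)) / total_days

days_365 = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
monthly = [5.0] * 12
print(year_mon_mean(monthly, days_365))  # 5.0
```

An unweighted mean of the twelve monthly values would slightly over-represent the shorter months; the weighting removes that bias.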
Synopsis
Description
This module computes statistical values over timesteps of the same year. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of timesteps
of the same year is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Note
The operators yearmean and yearavg compute only arithmetical means!
Example
To compute the yearly mean of a time series use:
cdo yearmean infile outfile
To compute the yearly mean from the correct weighted monthly mean use:
cdo yearmonmean infile outfile
Synopsis
yearpctl,p infile1 infile2 infile3 outfile
Description
This operator computes percentiles over all timesteps of the same year in infile1. The algorithm uses
histograms with minimum and maximum bounds given in infile2 and infile3, respectively. The
default number of histogram bins is 101. The default can be overridden by defining the environment
variable CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding
yearmin and yearmax operations, respectively. The time of outfile is determined by the time in
the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>.
For every adjacent sequence t_1, ..., t_n of timesteps of the same year it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the yearly 90th percentile of a time series use:
cdo yearmin infile minfile
cdo yearmax infile maxfile
cdo yearpctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values over timesteps of the same season. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of timesteps of
the same season is written to outfile. The time of outfile is determined by the time in the middle
of all contributing timesteps of infile. This can be changed with the CDO option --timestat_date <first|middle|last>. Be careful about the first and the last output timestep: their values may be incorrect if the seasons have incomplete timesteps.
Operators
Example
To compute the seasonal mean of a time series use:
cdo seasmean infile outfile
Synopsis
Description
This operator computes percentiles over all timesteps in infile1 of the same season. The algorithm
uses histograms with minimum and maximum bounds given in infile2 and infile3, respectively.
The default number of histogram bins is 101. The default can be overridden by defining the environ-
ment variable CDO_PCTL_NBINS. The files infile2 and infile3 should be the result of corresponding
seasmin and seasmax operations, respectively. The time of outfile is determined by the time in
the middle of all contributing timesteps of infile1. This can be changed with the CDO option --timestat_date <first|middle|last>. Be careful about the first and the last output timestep: their values may be incorrect if the seasons have incomplete timesteps.
For every adjacent sequence t_1, ..., t_n of timesteps of the same season it is:
o(t, x) = pth percentile{i(t′, x), t_1 < t′ ≤ t_n}
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the seasonal 90th percentile of a time series use:
cdo seasmin infile minfile
cdo seasmax infile maxfile
cdo seaspctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values of each hour and day of year. Depending on the chosen
operator the minimum, maximum, range, sum, average, variance or standard deviation of each hour
and day of year in infile is written to outfile. The date information in an output field is the date
of the last contributing input field.
Operators
Synopsis
Description
This module computes statistical values of each hour of day. Depending on the chosen operator
the minimum, maximum, range, sum, average, variance or standard deviation of each hour of day
in infile is written to outfile. The date information in an output field is the date of the last
contributing input field.
Operators
Synopsis
Description
This module computes statistical values of each day of year. Depending on the chosen operator
the minimum, maximum, range, sum, average, variance or standard deviation of each day of year
in infile is written to outfile. The date information in an output field is the date of the last
contributing input field.
Operators
Example
To compute the daily mean over all input years use:
cdo ydaymean infile outfile
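The grouping that ydaymean performs can be sketched with NumPy on synthetic data (3 years of 365 daily values, leap days ignored; purely illustrative, not CDO code):

```python
import numpy as np

# Hypothetical daily series over 3 years of 365 days: ydaymean averages
# all values that share the same day of year.
nyears, ndays = 3, 365
data = np.arange(nyears * ndays, dtype=float)   # one value per day
doy = np.tile(np.arange(ndays), nyears)         # day-of-year index 0..364

ydaymean = np.array([data[doy == d].mean() for d in range(ndays)])
```

Each output entry is the mean over one value per year, so the result has exactly 365 timesteps.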
Synopsis
Description
This operator writes a certain percentile of each day of year in infile1 to outfile. The algorithm
uses histograms with minimum and maximum bounds given in infile2 and infile3, respectively.
The default number of histogram bins is 101. The default can be overridden by setting the environment
variable CDO_PCTL_NBINS to a different value. The files infile2 and infile3 should be the result
of corresponding ydaymin and ydaymax operations, respectively. The date information in an output
field is the date of the last contributing input field.
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the daily 90th percentile over all input years use:
cdo ydaymin infile minfile
cdo ydaymax infile maxfile
cdo ydaypctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values of each month of year. Depending on the chosen operator
the minimum, maximum, range, sum, average, variance or standard deviation of each month of year
in infile is written to outfile. The date information in an output field is the date of the last
contributing input field. This can be changed with the CDO option --timestat_date <first|middle|last>.
Operators
Example
To compute the monthly mean over all input years use:
cdo ymonmean infile outfile
Synopsis
Description
This operator writes a certain percentile of each month of year in infile1 to outfile. The algorithm
uses histograms with minimum and maximum bounds given in infile2 and infile3, respectively.
The default number of histogram bins is 101. The default can be overridden by setting the environment
variable CDO_PCTL_NBINS to a different value. The files infile2 and infile3 should be the result of
corresponding ymonmin and ymonmax operations, respectively. The date information in an output
field is the date of the last contributing input field.
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the monthly 90th percentile over all input years use:
cdo ymonmin infile minfile
cdo ymonmax infile maxfile
cdo ymonpctl,90 infile minfile maxfile outfile
Synopsis
Description
This module computes statistical values of each season. Depending on the chosen operator the
minimum, maximum, range, sum, average, variance or standard deviation of each season in infile
is written to outfile. The date information in an output field is the date of the last contributing
input field.
Operators
Example
To compute the seasonal mean over all input years use:
cdo yseasmean infile outfile
Synopsis
Description
This operator writes a certain percentile of each season in infile1 to outfile. The algorithm uses
histograms with minimum and maximum bounds given in infile2 and infile3, respectively. The
default number of histogram bins is 101. The default can be overridden by setting the environment
variable CDO_PCTL_NBINS to a different value. The files infile2 and infile3 should be the result of
corresponding yseasmin and yseasmax operations, respectively. The date information in an output
field is the date of the last contributing input field.
Parameter
p FLOAT Percentile number in 0, ..., 100
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
To compute the seasonal 90th percentile over all input years use:
cdo yseasmin infile minfile
cdo yseasmax infile maxfile
cdo yseaspctl,90 infile minfile maxfile outfile
Synopsis
Description
This module writes running statistical values for each day of year in infile to outfile. Depending
on the chosen operator, the minimum, maximum, sum, average, variance or standard deviation of all
timesteps in running windows whose middle timestep corresponds to a certain day of year is computed.
The date information in an output field is the date of the timestep in the middle of the last
contributing running window. Note that the operator has to be applied to a continuous time series
of daily measurements in order to yield physically meaningful results. Also note that the output time
series begins (nts-1)/2 timesteps after the first timestep of the input time series and ends (nts-1)/2
timesteps before the last one. For input data which are complete but not continuous, such as time
series of daily measurements for the same month or season within different years, the operator yields
physically meaningful results only if the input time series includes the (nts-1)/2 days before and
after each period of interest.
Operators
o(001, x) = std{i(t, x), i(t+1, x), ..., i(t+nts−1, x); day[i(t + (nts−1)/2)] = 001}
    ...
o(366, x) = std{i(t, x), i(t+1, x), ..., i(t+nts−1, x); day[i(t + (nts−1)/2)] = 366}
Parameter
nts INTEGER Number of timesteps
Example
Assume the input data provide a continuous time series of daily measurements. To compute the
running multi-year daily mean over all input timesteps for a running window of five days use:
cdo ydrunmean,5 infile outfile
Note that except for the standard deviation the results of the operators in this module are equivalent
to a composition of corresponding operators from the YDAYSTAT and RUNSTAT modules. For
instance, the above command yields the same result as:
cdo ydaymean -runmean,5 infile outfile
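The stated equivalence can be illustrated for the mean with a small NumPy sketch (synthetic data; the centered-window handling is an assumption of this sketch, not CDO code):

```python
import numpy as np

def runmean(x, nts):
    # centered running mean; output is (nts-1)/2 shorter on each side
    return np.convolve(x, np.ones(nts) / nts, mode="valid")

nyears, ndays, nts = 3, 365, 5
half = (nts - 1) // 2
x = np.sin(2 * np.pi * np.arange(nyears * ndays) / 365.0)

# running mean first, then group the window centers by day of year
rm = runmean(x, nts)
doy = np.arange(nyears * ndays)[half:-half] % 365
ydrunmean = np.array([rm[doy == d].mean() for d in range(365)])
```

The first and last `half` days of the series contribute only as window members, which is why the output starts and ends (nts-1)/2 timesteps inside the input.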
Synopsis
Description
This operator writes running percentile values for each day of year in infile1 to outfile. A certain
percentile is computed for all timesteps in running windows whose middle timestep corresponds
to a certain day of year. The algorithm uses histograms with minimum and maximum bounds given
in infile2 and infile3, respectively. The default number of histogram bins is 101. The default
can be overridden by setting the environment variable CDO_PCTL_NBINS to a different value. The files
infile2 and infile3 should be the result of corresponding ydrunmin and ydrunmax operations,
respectively. The date information in an output field is the date of the timestep in the middle of
the last contributing running window. Note that the operator has to be applied to a continuous
time series of daily measurements in order to yield physically meaningful results. Also note that the
output time series begins (nts-1)/2 timesteps after the first timestep of the input time series and ends
(nts-1)/2 timesteps before the last. For input data which are complete but not continuous, such as
time series of daily measurements for the same month or season within different years, the operator
only yields physically meaningful results if the input time series includes the (nts-1)/2 days before
and after each period of interest.
o(001, x) = pth percentile{i(t, x), i(t+1, x), ..., i(t+nts−1, x); day[i(t + (nts−1)/2)] = 001}
    ...
o(366, x) = pth percentile{i(t, x), i(t+1, x), ..., i(t+nts−1, x); day[i(t + (nts−1)/2)] = 366}
Parameter
p FLOAT Percentile number in 0, ..., 100
nts INTEGER Number of timesteps
Environment
CDO_PCTL_NBINS Sets the number of histogram bins. The default number is 101.
Example
Assume the input data provide a continuous time series of daily measurements. To compute the
running multi-year daily 90th percentile over all input timesteps for a running window of five days
use:
cdo ydrunmin,5 infile minfile
cdo ydrunmax,5 infile maxfile
cdo ydrunpctl,90,5 infile minfile maxfile outfile
Reference manual Correlation and co.
Covariance:

covar:
    n⁻¹ Σ_{i=1..n} (x_i − x̄)(y_i − ȳ)

covar weighted by {w_i, i = 1, ..., n}:
    (Σ_{j=1..n} w_j)⁻¹ Σ_{i=1..n} w_i ( x_i − (Σ_{j} w_j)⁻¹ Σ_{j} w_j x_j ) ( y_i − (Σ_{j} w_j)⁻¹ Σ_{j} w_j y_j )
Correlation and co. Reference manual
Synopsis
Description
The correlation coefficient is a quantity that gives the quality of a least squares fitting to the original
data. This operator correlates all gridpoints of two fields for each timestep. With
S(t) = {x, i1 (t, x) ̸= missval ∧ i2 (t, x) ̸= missval}
it is
o(t, 1) = [ Σ_{x∈S(t)} i1(t,x) i2(t,x) w(x) − ( Σ_{x∈S(t)} i1(t,x) w(x) ) ( Σ_{x∈S(t)} i2(t,x) w(x) ) ]
          / sqrt( [ Σ_{x∈S(t)} i1(t,x)² w(x) − ( Σ_{x∈S(t)} i1(t,x) w(x) )² ] [ Σ_{x∈S(t)} i2(t,x)² w(x) − ( Σ_{x∈S(t)} i2(t,x) w(x) )² ] )

where w(x) are the area weights obtained from the input streams. For every timestep t only those field
elements x belong to the sample which have i1(t,x) ≠ missval and i2(t,x) ≠ missval.
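What this operator computes can be sketched as an area-weighted Pearson correlation in NumPy. This is a simplified illustration (NaN entries play the role of missing values), not the CDO implementation:

```python
import numpy as np

def fld_cor(i1, i2, w):
    """Area-weighted Pearson correlation over the grid points of one
    timestep; NaN entries are treated as missing values."""
    m = np.isfinite(i1) & np.isfinite(i2)       # the sample S(t)
    i1, i2, w = i1[m], i2[m], w[m]
    m1 = np.sum(w * i1) / np.sum(w)             # weighted means
    m2 = np.sum(w * i2) / np.sum(w)
    cov = np.sum(w * (i1 - m1) * (i2 - m2))
    var1 = np.sum(w * (i1 - m1) ** 2)
    var2 = np.sum(w * (i2 - m2) ** 2)
    return cov / np.sqrt(var1 * var2)

w = np.ones(50)
a = np.linspace(0.0, 1.0, 50)
r = fld_cor(a, 2.0 * a + 1.0, w)                # perfectly linear fields
```

Two fields related by an exact linear map correlate to 1, as expected.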
Synopsis
Description
The correlation coefficient is a quantity that gives the quality of a least squares fitting to the original
data. This operator correlates each gridpoint of two fields over all timesteps. With
S(x) = {t, i1 (t, x) ̸= missval ∧ i2 (t, x) ̸= missval}
it is
o(1, x) = [ Σ_{t∈S(x)} i1(t,x) i2(t,x) − n ī1(x) ī2(x) ]
          / sqrt( [ Σ_{t∈S(x)} i1(t,x)² − n ī1(x)² ] [ Σ_{t∈S(x)} i2(t,x)² − n ī2(x)² ] )

where n = #S(x) and ī_k(x) denotes the mean of i_k over S(x).
For every gridpoint x only those timesteps t belong to the sample, which have i1 (t, x) ̸= missval and
i2 (t, x) ̸= missval.
Synopsis
Description
This operator calculates the covariance of two fields over all gridpoints for each timestep. With
S(t) = {x, i1 (t, x) ̸= missval ∧ i2 (t, x) ̸= missval}
it is
o(t, 1) = ( Σ_{x∈S(t)} w(x) )⁻¹ Σ_{x∈S(t)} w(x) [ i1(t,x) − ( Σ_{x∈S(t)} w(x) i1(t,x) ) / ( Σ_{x∈S(t)} w(x) ) ] [ i2(t,x) − ( Σ_{x∈S(t)} w(x) i2(t,x) ) / ( Σ_{x∈S(t)} w(x) ) ]
where w(x) are the area weights obtained from the input streams. For every timestep t only those field
elements x belong to the sample which have i1(t,x) ≠ missval and i2(t,x) ≠ missval.
Synopsis
Description
This operator calculates the covariance of two fields at each gridpoint over all timesteps. With
S(x) = {t, i1 (t, x) ̸= missval ∧ i2 (t, x) ̸= missval}
it is
o(1, x) = n⁻¹ Σ_{t∈S(x)} ( i1(t,x) − ī1(x) ) ( i2(t,x) − ī2(x) )

where n = #S(x) and ī_k(x) denotes the mean of i_k over S(x).
For every gridpoint x only those timesteps t belong to the sample, which have i1 (t, x) ̸= missval and
i2 (t, x) ̸= missval.
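A minimal NumPy sketch of the per-gridpoint covariance over time (illustrative only, not CDO code):

```python
import numpy as np

def tim_covar(i1, i2):
    """Covariance over all timesteps at each grid point (arrays shaped
    time x grid); n is the number of timesteps, as in the formula above."""
    n = i1.shape[0]
    d1 = i1 - i1.mean(axis=0)
    d2 = i2 - i2.mean(axis=0)
    return (d1 * d2).sum(axis=0) / n

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 4))
c = tim_covar(x, x)       # covariance of a field with itself is its variance
```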
Regression Reference manual
2.10. Regression
This section contains modules for linear regression of time series.
Here is a short overview of all operators in this section:
regres Regression
detrend Detrend
trend Trend
Reference manual Regression
Synopsis
Description
The values of the input file infile are assumed to be distributed as N(a + bt, σ²) with unknown a,
b and σ². This operator estimates the parameter b. For every field element x only those timesteps t
belong to the sample S(x) which have i(t, x) ≠ miss. It is
o(1, x) = [ Σ_{t∈S(x)} ( i(t,x) − (1/#S(x)) Σ_{t′∈S(x)} i(t′,x) ) ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ ) ]
          / [ Σ_{t∈S(x)} ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ )² ]
It is assumed that all timesteps are equidistant; if this is not the case, set the parameter equal=false.
Parameter
equal BOOL Set to false for unequally distributed timesteps (default: true)
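The estimator above is the ordinary least-squares slope; a minimal NumPy sketch (illustrative, not CDO code):

```python
import numpy as np

def regres(i, t):
    """Least-squares slope b of i ~ a + b*t over the sample of timesteps
    with valid (finite) values, as in the estimator above."""
    m = np.isfinite(i)
    t, i = t[m], i[m]
    td = t - t.mean()
    return np.sum((i - i.mean()) * td) / np.sum(td ** 2)

t = np.arange(20.0)
y = 3.0 + 0.5 * t         # noiseless line with slope 0.5
b = regres(y, t)
```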
Synopsis
Description
Every time series in infile is linearly detrended. For every field element x only those timesteps t
belong to the sample S(x) which have i(t, x) ≠ miss. It is assumed that all timesteps are equidistant;
if this is not the case, set the parameter equal=false. With
a(x) = (1/#S(x)) Σ_{t∈S(x)} i(t,x) − b(x) (1/#S(x)) Σ_{t∈S(x)} t
and
b(x) = [ Σ_{t∈S(x)} ( i(t,x) − (1/#S(x)) Σ_{t′∈S(x)} i(t′,x) ) ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ ) ]
       / [ Σ_{t∈S(x)} ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ )² ]
it is
o(t, x) = i(t, x) − (a(x) + b(x)t)
Parameter
equal BOOL Set to false for unequally distributed timesteps (default: true)
Note
This operator has to keep the fields of all timesteps concurrently in the memory. If not enough
memory is available use the operators trend and subtrend.
Example
To detrend the data in infile and to store the detrended data in outfile use:
cdo detrend infile outfile
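The detrending step can be sketched in NumPy, following the formulas above (illustrative, not the CDO implementation; equidistant timesteps assumed):

```python
import numpy as np

def detrend(y, t):
    """Subtract the least-squares line a + b*t from y, per the formulas
    above (equidistant timesteps assumed)."""
    td = t - t.mean()
    b = np.sum((y - y.mean()) * td) / np.sum(td ** 2)
    a = y.mean() - b * t.mean()
    return y - (a + b * t)

t = np.arange(100.0)
y = 1.0 + 0.2 * t + np.sin(t)       # linear trend plus a signal
resid = detrend(y, t)
```

The residual keeps the signal but has zero mean and zero linear trend.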
Synopsis
Description
The values of the input file infile are assumed to be distributed as N(a + bt, σ²) with unknown
a, b and σ². This operator estimates the parameters a and b. For every field element x only those
timesteps t belong to the sample S(x) which have i(t, x) ≠ miss. It is
o₁(1, x) = (1/#S(x)) Σ_{t∈S(x)} i(t,x) − b(x) (1/#S(x)) Σ_{t∈S(x)} t
and
o₂(1, x) = [ Σ_{t∈S(x)} ( i(t,x) − (1/#S(x)) Σ_{t′∈S(x)} i(t′,x) ) ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ ) ]
           / [ Σ_{t∈S(x)} ( t − (1/#S(x)) Σ_{t′∈S(x)} t′ )² ]
Thus the estimation for a is stored in outfile1 and that for b is stored in outfile2. To subtract the
trend from the data, see the operator subtrend. It is assumed that all timesteps are equidistant;
if this is not the case, set the parameter equal=false.
Parameter
equal BOOL Set to false for unequally distributed timesteps (default: true)
Synopsis
Description
This module is for adding or subtracting a trend computed by the operator trend.
Operators
Parameter
equal BOOL Set to false for unequally distributed timesteps (default: true)
Example
The typical call for detrending the data in infile and storing the detrended data in outfile is:
cdo trend infile afile bfile
cdo subtrend infile afile bfile outfile
EOFs Reference manual
2.11. EOFs
This section contains modules to compute Empirical Orthogonal Functions and, once they are computed,
their principal coefficients.
An introduction to the theory of principal component analysis as applied here can be found in:
Principal Component Analysis in Meteorology and Oceanography [Preisendorfer]
Details about calculation in the time- and spatial spaces are found in:
Statistical Analysis in Climate Research [vonStorch]
EOFs are defined as the eigenvectors of the scatter matrix (covariance matrix) of the data. For the sake of
simplicity, samples are regarded as time series of anomalies
(z(t)) , t ∈ {1, . . . , n}
of (column-) vectors z(t) with p entries (where p is the gridsize). Thus, using the fact that z_j(t) are
anomalies, i.e.
⟨z_j⟩ = n⁻¹ Σ_{i=1}^{n} z_j(i) = 0   for all 1 ≤ j ≤ p,

the scatter matrix S can be written in terms of the weighted anomalies as

S = n⁻¹ Σ_{t=1}^{n} [√W z(t)] [√W z(t)]ᵀ

where W is the diagonal matrix containing the area weight of cell x in z at W(x, x).
The matrix S has a set of orthonormal eigenvectors e_j, j = 1, ..., p, which are called empirical orthogonal
functions (EOFs) of the sample z. (Please note that e_j is the eigenvector of S and not the weighted
eigenvector, which would be We_j.) Let the corresponding eigenvalues be denoted λ_j. The vectors e_j are
spatial patterns which explain a certain amount of variance of the time series z(t) that is related linearly
to λ_j. Thus, the spatial pattern defined by the first eigenvector (the one with the largest eigenvalue) is the
pattern which explains the maximum possible amount of variance of the sample z(t). The orthonormality of
the eigenvectors reads as
Σ_{x=1}^{p} [√W(x,x) e_j(x)] [√W(x,x) e_k(x)] = Σ_{x=1}^{p} W(x,x) e_j(x) e_k(x) = { 0 if j ≠ k ; 1 if j = k }
If all EOFs ej with λj ̸= 0 are calculated, the data can be reconstructed from
z(t, x) = Σ_{j=1}^{p} W(x,x) a_j(t) e_j(x)
where aj are called the principal components or principal coefficients or EOF coefficients of z. These
coefficients - as readily seen from above - are calculated as the projection of an EOF ej onto a time step
of the data sample z(t0 ) as
a_j(t₀) = Σ_{x=1}^{p} [√W(x,x) e_j(x)] [√W(x,x) z(t₀, x)] = [√W z(t₀)]ᵀ [√W e_j]
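The computation outlined above can be sketched with NumPy's symmetric eigensolver. For simplicity the weights are folded into the anomalies, so the columns of E correspond to the weighted patterns √W e_j; all names are illustrative, not CDO code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6                         # timesteps, grid points
z = rng.standard_normal((n, p))
z -= z.mean(axis=0)                   # make the sample anomalies, <z_j> = 0
w = np.full(p, 1.0 / p)               # area weights, the diagonal of W

# scatter (covariance) matrix of the weighted anomalies sqrt(W) z
zw = z * np.sqrt(w)
S = zw.T @ zw / n
lam, E = np.linalg.eigh(S)            # ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]        # reorder: largest variance first

# principal coefficients: project each timestep onto the weighted patterns
a = zw @ E
```

The eigenvectors are orthonormal and the principal coefficients are uncorrelated, with variance λ_j, matching the properties stated above.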
Reference manual EOFs
Synopsis
< operator >,neof infile outfile1 outfile2
Description
This module calculates empirical orthogonal functions of the data in infile as the eigenvectors of
the scatter matrix (covariance matrix) S of the data sample z(t). A more detailed description can be
found above.
Please note that the input data are assumed to be anomalies.
If the operator eof is chosen, the EOFs are computed in either time or spatial space, whichever is
faster. If the user already knows which computation is faster, the module can be forced to perform
the computation in time space or grid space by using the operators eoftime or eofspatial, respectively.
This can enhance performance, especially for very long time series, where the number of timesteps is
larger than the number of grid points. Data in infile are assumed to be anomalies; if they are not,
the behavior of this module is not well defined. After execution, outfile1 will contain all eigenvalues
and outfile2 the eigenvectors e_j. All EOFs and eigenvalues are computed. However, only the
first neof EOFs are written to outfile2. Nonetheless, outfile1 contains all eigenvalues.
Missing values are not fully supported. Support is only checked for masks of missing values that do
not change in time. Although there will still be results, they are not trustworthy, and a warning will
occur. In the latter case we suggest replacing missing values with 0 in infile.
Operators
eof Calculate EOFs in spatial or time space
Parameter
neof INTEGER Number of eigenfunctions
Environment
CDO_SVD_MODE Is used to choose the algorithm for eigenvalue calculation. Options are ’jacobi’
for a one-sided parallel jacobi-algorithm (only executed in parallel if -P flag
is set) and ’danielson_lanczos’ for a non-parallel d/l algorithm. The default
setting is ’jacobi’.
CDO_WEIGHT_MODE It is used to set the weight mode. The default is ’off’. Set it to ’on’ for a
weighted version.
MAX_JACOBI_ITER Is the maximum integer number of annihilation sweeps that is executed if the
jacobi-algorithm is used to compute the eigen values. The default value is 12.
FNORM_PRECISION Is the Frobenius norm of the matrix consisting of an annihilation pair of eigen-
vectors that is used to determine whether the eigenvectors have reached a sufficient
level of convergence. If all annihilation pairs of vectors have a norm below this
value, the computation is considered to have converged properly. Otherwise, a
warning will occur. The default value is 1e-12.
Example
To calculate the first 40 EOFs of a data-set containing anomalies use:
cdo eof,40 infile outfile1 outfile2
If the dataset does not contain anomalies, compute them first and use:
cdo sub infile1 -timmean infile1 anom_file
cdo eof,40 anom_file outfile1 outfile2
Synopsis
Description
This module calculates the time series of the principal coefficients for given EOF (empirical orthogonal
functions) and data. Time steps in infile1 are assumed to be the EOFs; time steps in infile2 are
assumed to be the time series. Note that this operator calculates a non-weighted dot product of the
fields in infile1 and infile2. For consistency, set the environment variable CDO_WEIGHT_MODE=off
when using eof or eof3d. Given a set of EOFs e_j and a time series of data z(t) with p entries for
each timestep from which e_j have been calculated, this operator calculates the time series of the
projections of data onto each EOF

o_j(t) = Σ_{x=1}^{p} z(t, x) e_j(x)

There will be a separate file o_j for the principal coefficients of each EOF.
As the EOFs e_j are uncorrelated, so are their principal coefficients, i.e.
Σ_{t=1}^{n} o_j(t) o_k(t) = { 0 if j ≠ k ; λ_j if j = k }   with   Σ_{t=1}^{n} o_j(t) = 0 for all j ∈ {1, ..., p}.
There will be a separate file containing a time series of principal coefficients with time information from
infile2 for each EOF in infile1. Output files will be numbered as <obase><neof><suffix>
where neof+1 is the number of the EOF (timestep) in infile1 and suffix is the filename extension
derived from the file format.
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Example
To calculate principal coefficients of the first 40 EOFs of anom_file, and write them to files beginning
with obase, use:
export CDO_WEIGHT_MODE=off
cdo eof,40 anom_file eval_file eof_file
cdo eofcoeff eof_file anom_file obase
The principal coefficients of the first EOF will be in the file obase000000.nc (and so forth for higher
EOFs; the nth EOF will be in obase<n-1>).
If the dataset infile does not contain anomalies, compute them first and use:
export CDO_WEIGHT_MODE=off
cdo sub infile -timmean infile anom_file
cdo eof,40 anom_file eval_file eof_file
cdo eofcoeff eof_file anom_file obase
Interpolation Reference manual
2.12. Interpolation
This section contains modules to interpolate datasets. There are several operators to interpolate horizontal
fields to a new grid. Some of those operators can handle only 2D fields on a regular rectangular grid. Vertical
interpolation of 3D variables is possible from hybrid model levels to height or pressure levels. Interpolation
in time is possible between time steps and years.
Here is a short overview of all operators in this section:
Reference manual Interpolation
Synopsis
< operator >,grid infile outfile
Description
This module contains operators for a bilinear remapping of fields between grids in spherical
coordinates. The interpolation is based on an adapted SCRIP library version. For a detailed description
of the interpolation method see [SCRIP]. This interpolation method only works on quadrilateral
curvilinear source grids. Below is a schematic illustration of the bilinear remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
remapbil Bilinear interpolation
Performs a bilinear interpolation on all input fields.
genbil Generate bilinear interpolation weights
Generates bilinear interpolation weights for the first input field and writes the result
to a file. The format of this file is NetCDF following the SCRIP convention. Use the
operator remap to apply these remapping weights to a data file with the same source
grid.
Parameter
grid STRING Target grid description file or name
Environment
REMAP_EXTRAPOLATE This variable is used to switch the extrapolation feature ’on’ or ’off’. By
default the extrapolation is enabled for circular grids.
Example
Say infile contains fields on a quadrilateral curvilinear grid. To remap all fields bilinearly to a
Gaussian N32 grid, type:
cdo remapbil,n32 infile outfile
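The elementary step behind bilinear remapping can be sketched for a single target point on a regular source grid. This is a simplified illustration; SCRIP handles curvilinear grids, spherical coordinates and weight generation far more generally:

```python
import numpy as np

def bilinear(field, lats, lons, lat, lon):
    """Bilinear interpolation of one target point from a regular
    lat/lon grid (point assumed inside the grid)."""
    j = np.searchsorted(lats, lat) - 1          # enclosing cell indices
    i = np.searchsorted(lons, lon) - 1
    v = (lat - lats[j]) / (lats[j + 1] - lats[j])
    u = (lon - lons[i]) / (lons[i + 1] - lons[i])
    return ((1 - u) * (1 - v) * field[j, i] + u * (1 - v) * field[j, i + 1]
            + (1 - u) * v * field[j + 1, i] + u * v * field[j + 1, i + 1])

lats = np.array([0.0, 10.0])
lons = np.array([0.0, 10.0])
f = np.array([[0.0, 10.0],
              [10.0, 20.0]])                    # f = lat + lon at the corners
val = bilinear(f, lats, lons, 5.0, 5.0)
```

A field that is linear in latitude and longitude is reproduced exactly, which is the defining property of bilinear interpolation.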
Synopsis
< operator >,grid infile outfile
Description
This module contains operators for a bicubic remapping of fields between grids in spherical
coordinates. The interpolation is based on an adapted SCRIP library version. For a detailed description
of the interpolation method see [SCRIP]. This interpolation method only works on quadrilateral
curvilinear source grids. Below is a schematic illustration of the bicubic remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
remapbic Bicubic interpolation
Performs a bicubic interpolation on all input fields.
genbic Generate bicubic interpolation weights
Generates bicubic interpolation weights for the first input field and writes the result
to a file. The format of this file is NetCDF following the SCRIP convention. Use the
operator remap to apply these remapping weights to a data file with the same source
grid.
Parameter
grid STRING Target grid description file or name
Environment
REMAP_EXTRAPOLATE This variable is used to switch the extrapolation feature ’on’ or ’off’. By
default the extrapolation is enabled for circular grids.
Example
Say infile contains fields on a quadrilateral curvilinear grid. To remap all fields with bicubic
interpolation to a Gaussian N32 grid, type:
cdo remapbic,n32 infile outfile
Synopsis
Description
This module contains operators for a nearest neighbor remapping of fields between grids in spherical
coordinates. Below is a schematic illustration of the nearest neighbor remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
Parameter
grid STRING Target grid description file or name
Environment
REMAP_EXTRAPOLATE This variable is used to switch the extrapolation feature ’on’ or ’off’. By
default the extrapolation is enabled for this remapping method.
CDO_GRIDSEARCH_RADIUS Grid search radius in degree, default 180 degree.
Synopsis
Description
This module contains operators for an inverse distance weighted average remapping of the four nearest
neighbor values of fields between grids in spherical coordinates. The default number of 4 neighbors can
be changed with the neighbors parameter. Below is a schematic illustration of the distance weighted
average remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
Parameter
grid STRING Target grid description file or name
neighbors INTEGER Number of nearest neighbors [default: 4]
Environment
REMAP_EXTRAPOLATE This variable is used to switch the extrapolation feature ’on’ or ’off’. By
default the extrapolation is enabled for this remapping method.
CDO_GRIDSEARCH_RADIUS Grid search radius in degree, default 180 degree.
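The distance-weighted average can be sketched for one target point. This is a simplified illustration using plain distances; the actual operator works with great-circle distances on the sphere:

```python
import numpy as np

def idw_point(src_vals, dists, neighbors=4):
    """Inverse-distance-weighted average of the nearest `neighbors`
    source values for a single target point."""
    order = np.argsort(dists)[:neighbors]       # nearest neighbors
    w = 1.0 / np.maximum(dists[order], 1e-12)   # guard against zero distance
    return np.sum(w * src_vals[order]) / np.sum(w)

vals = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
d = np.array([1.0, 1.0, 1.0, 1.0, 50.0])        # the far value is excluded
out = idw_point(vals, d)
```

Only the four nearest source values contribute, so the distant outlier has no influence on the result.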
Synopsis
< operator >,grid infile outfile
Description
This module contains operators for a first order conservative remapping of fields between grids in
spherical coordinates. The operators in this module use code from the YAC software package to
compute the conservative remapping weights. For a detailed description of the interpolation method
see [YAC]. The interpolation method is completely general and can be used for any grid on a sphere.
The search algorithm for the conservative remapping requires that no grid cell occurs more than once.
Below is a schematic illustration of the 1st order conservative remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
remapcon First order conservative remapping
Performs a first order conservative remapping on all input fields.
gencon Generate 1st order conservative remap weights
Generates first order conservative remapping weights for the first input field and writes
the result to a file. The format of this file is NetCDF following the SCRIP convention.
Use the operator remap to apply these remapping weights to a data file with the same
source grid.
Parameter
grid STRING Target grid description file or name
Environment
CDO_REMAP_NORM This variable is used to choose the normalization of the conservative
interpolation. By default CDO_REMAP_NORM is set to 'fracarea'. 'fracarea' uses the sum of
the non-masked source cell intersected areas to normalize each target cell field
value. This results in a reasonable flux value but the flux is not locally
conserved. The option 'destarea' uses the total target cell area to normalize each
target cell field value. Local flux conservation is ensured, but unreasonable flux
values may result.
REMAP_AREA_MIN This variable is used to set the minimum destination area fraction. The default
of this variable is 0.0.
Example
Say infile contains fields on a quadrilateral curvilinear grid. To remap all fields conservatively to a
Gaussian N32 grid, type:
cdo remapcon,n32 infile outfile
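In one dimension, first order conservative remapping reduces to overlap-weighted averaging of the source cells. A minimal sketch (illustrative only; YAC computes true spherical cell intersections):

```python
import numpy as np

def remapcon_1d(src_edges, src_vals, tgt_edges):
    """First order conservative remapping in 1D: each target cell is the
    overlap-weighted average of the source cells it intersects."""
    out = np.zeros(len(tgt_edges) - 1)
    for k in range(len(out)):
        lo, hi = tgt_edges[k], tgt_edges[k + 1]
        overlap = (np.minimum(src_edges[1:], hi)
                   - np.maximum(src_edges[:-1], lo)).clip(min=0.0)
        out[k] = np.sum(overlap * src_vals) / np.sum(overlap)
    return out

src_edges = np.array([0.0, 1.0, 2.0])
tgt_edges = np.array([0.0, 2.0])                # one coarse target cell
out = remapcon_1d(src_edges, np.array([1.0, 3.0]), tgt_edges)
```

The coarse cell receives the area-weighted mean of the two source cells, so the integral of the field is preserved.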
Synopsis
< operator >,grid infile outfile
Description
This module contains operators for a second order conservative remapping of fields between grids in
spherical coordinates. The interpolation is based on an adapted SCRIP library version. For a detailed
description of the interpolation method see [SCRIP]. The second order conservative remapping is not
available for unstructured source grids. The search algorithm for the conservative remapping requires
that no grid cell occurs more than once. Below is a schematic illustration of the 2nd order conservative
remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
remapcon2 Second order conservative remapping
Performs a second order conservative remapping on all input fields.
gencon2 Generate 2nd order conservative remap weights
Generates second order conservative remapping weights for the first input field and
writes the result to a file. The format of this file is NetCDF following the SCRIP
convention. Use the operator remap to apply these remapping weights to a data file
with the same source grid.
Parameter
grid STRING Target grid description file or name
Environment
CDO_REMAP_NORM This variable is used to choose the normalization of the conservative
interpolation. By default CDO_REMAP_NORM is set to 'fracarea'. 'fracarea' uses the sum of
the non-masked source cell intersected areas to normalize each target cell field
value. This results in a reasonable flux value but the flux is not locally
conserved. The option 'destarea' uses the total target cell area to normalize each
target cell field value. Local flux conservation is ensured, but unreasonable flux
values may result.
REMAP_AREA_MIN This variable is used to set the minimum destination area fraction. The default
of this variable is 0.0.
Note
The SCRIP conservative remapping method doesn’t work correctly for some grid combinations.
Example
Say infile contains fields on a quadrilateral curvilinear grid. To remap all fields conservatively (2nd
order) to a Gaussian N32 grid, type:
cdo remapcon2,n32 infile outfile
Synopsis
Description
This module contains operators for a largest area fraction remapping of fields between grids in spherical
coordinates. The operators in this module use code from the YAC software package to compute
the largest area fraction. For a detailed description of the interpolation method see [YAC]. The
interpolation method is completely general and can be used for any grid on a sphere. The search
algorithm for this remapping method requires that no grid cell occurs more than once. Below is a
schematic illustration of the largest area fraction conservative remapping:
The figure on the left side shows the input data on a regular lon/lat source grid and on the right side
the remapped result on an unstructured triangular target grid. The figure in the middle shows the
input data with the target grid. Grid cells with missing value are grey colored.
Operators
Parameter
grid STRING Target grid description file or name
Environment
REMAP_AREA_MIN This variable is used to set the minimum destination area fraction. The default
of this variable is 0.0.
Synopsis
Description
Interpolation between different horizontal grids can be a very time-consuming process. Especially if
the data are on an unstructured and/or a large grid. In this case the interpolation process can be split
into two parts. Firstly the generation of the interpolation weights, which is the most time-consuming
part. These interpolation weights can be reused for every remapping process with the operator remap.
This operator remaps all input fields to a new horizontal grid. The remap type and the interpolation
weights of one input grid are read from a NetCDF file. More weights are computed if the input
fields are on different grids. The NetCDF file with the weights should follow the [SCRIP] convention.
Normally these weights come from a previous call to one of the genXXX operators (e.g. genbil) or
were created by the original SCRIP package.
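Applying precomputed weights amounts to a sparse matrix-vector product over the link list stored in the weight file. A minimal Python sketch (the (dst, src, weight) triples and 0-based indices are illustrative assumptions; real SCRIP files store these as dst_address, src_address and remap_matrix arrays with 1-based indices):

```python
def apply_weights(src, links):
    """Apply sparse interpolation weights.

    links: list of (dst_index, src_index, weight) triples, i.e. the
    link list from a SCRIP-style weight file (0-based here).
    """
    dst = {}
    for d, s, w in links:
        # each target cell accumulates weighted source contributions
        dst[d] = dst.get(d, 0.0) + w * src[s]
    return dst

# Two target cells, each a weighted average of two source cells:
src = [10.0, 20.0, 30.0]
links = [(0, 0, 0.5), (0, 1, 0.5), (1, 1, 0.25), (1, 2, 0.75)]
print(apply_weights(src, links))  # {0: 15.0, 1: 27.5}
```

This is why reusing the weights is cheap: the expensive search and weight computation happens once in the genXXX step, and each later remap call only performs this accumulation.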
Parameter
grid STRING Target grid description file or name
weights STRING Interpolation weights (SCRIP NetCDF file)
Environment
CDO_REMAP_NORM This variable is used to choose the normalization of the conservative
interpolation. By default CDO_REMAP_NORM is set to ’fracarea’. ’fracarea’
uses the sum of the non-masked source cell intersected areas to normalize
each target cell field value. This results in a reasonable flux value but
the flux is not locally conserved. The option ’destarea’ uses the total
target cell area to normalize each target cell field value. Local flux
conservation is ensured, but unreasonable flux values may result.
REMAP_EXTRAPOLATE This variable is used to switch the extrapolation feature ’on’ or ’off’.
By default the extrapolation is enabled for remapdis, remapnn and for
circular grids.
REMAP_AREA_MIN This variable is used to set the minimum destination area fraction. The
default of this variable is 0.0.
CDO_GRIDSEARCH_RADIUS Grid search radius in degree, default 180 degree.
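The effect of the two CDO_REMAP_NORM options can be illustrated with a small sketch (hypothetical numbers, not the actual CDO implementation):

```python
def conservative_value(weighted_sum, overlap_area, cell_area, norm):
    # 'fracarea': divide by the non-masked intersected area -> reasonable
    # flux value, but flux is not locally conserved.
    # 'destarea': divide by the full target cell area -> locally
    # conservative, but values can be unreasonable where the cell is
    # only partly covered.
    if norm == "fracarea":
        return weighted_sum / overlap_area
    elif norm == "destarea":
        return weighted_sum / cell_area
    raise ValueError(norm)

# Target cell of area 1.0, only half covered by non-masked source
# cells contributing a weighted sum of 5.0:
print(conservative_value(5.0, 0.5, 1.0, "fracarea"))  # 10.0
print(conservative_value(5.0, 0.5, 1.0, "destarea"))  # 5.0
```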
Example
Say infile contains fields on a quadrilateral curvilinear grid. To remap all fields bilinear to a Gaussian
N32 grid use:
cdo genbil,n32 infile remapweights.nc
cdo remap,n32,remapweights.nc infile outfile
Synopsis
Description
This operator interpolates between different vertical hybrid levels. This includes the preparation of
consistent data for the free atmosphere. The procedure for the vertical interpolation is based on the
HIRLAM scheme and was adapted from [INTERA]. The vertical interpolation is based on the vertical
integration of the hydrostatic equation with a few adjustments. The basic tasks are the following:
• at first integration of hydrostatic equation
• extrapolation of surface pressure
• Planetary Boundary-Layer (PBL) profile interpolation
• interpolation in free atmosphere
• merging of both profiles
• final surface pressure correction
The vertical interpolation corrects the surface pressure. This is simply a cut-off or an addition of
air mass. This mass correction should not influence the geostrophic velocity field in the middle
troposphere. Therefore the total mass above a given reference level is conserved. As reference level the
geopotential height of the 400 hPa level is used. Near the surface the correction can affect the vertical
structure of the PBL. Therefore the interpolation is done using the potential temperature. But in
the free atmosphere above a certain η (η=0.8 defining the top of the PBL) the interpolation is done
linearly. After the interpolation both profiles are merged. With the resulting temperature/pressure
correction the hydrostatic equation is integrated again and adjusted to the reference level finding the
final surface pressure correction. A more detailed description of the interpolation can be found in
[INTERA]. This operator requires all variables on the same horizontal grid.
Parameter
vct STRING File name of an ASCII dataset with the vertical coordinate table
oro STRING File name with the orography (surf. geopotential) of the target dataset (optional)
Environment
REMAPETA_PTOP Sets the minimum pressure level for condensation. Above this level the humidity
is set to the constant 1.E-6. The default value is 0 Pa.
Note
The code numbers or the variable names of the required parameters have to follow the [ECHAM]
convention.
Use the sinfo command to test if your vertical coordinate system is recognized as hybrid system.
In case remapeta complains about not finding any data on hybrid model levels you may wish to use
the setzaxis command to generate a zaxis description which conforms to the ECHAM convention. See
section "1.4 Z-axis description" for an example how to define a hybrid Z-axis.
Example
To remap between different hybrid model level data use:
cdo remapeta,vct infile outfile
Synopsis
ml2pl,plevels infile outfile
ml2hl,hlevels infile outfile
Description
Interpolates 3D variables on hybrid sigma pressure level to pressure or height levels. The input file
should contain the log. surface pressure or the surface pressure. To extrapolate the temperature,
the surface geopotential is also needed. It is assumed that the geopotential heights are located at
the hybrid layer interfaces. For the lowest layer of geopotential heights the surface geopotential is
required. The pressure, temperature, geopotential height, and surface geopotential are identified by
their GRIB1 code number or NetCDF CF standard name. Supported parameter tables are: WMO
standard table number 2 and ECMWF local table number 128.
Use the alias ml2plx/ml2hlx or the environment variable EXTRAPOLATE to extrapolate missing values.
This operator requires all variables on the same horizontal grid. Missing values in the input data are
not supported.
Operators
ml2pl Model to pressure level interpolation
Interpolates 3D variables on hybrid sigma pressure level to pressure level.
ml2hl Model to height level interpolation
Interpolates 3D variables on hybrid sigma pressure level to height level. The procedure is
the same as for the operator ml2pl except that the pressure levels are calculated from the
heights by: plevel = 101325 · exp(−hlevel/7000)
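The height-to-pressure conversion used by ml2hl can be checked with a few lines of Python (a sketch of the stated formula only):

```python
import math

def plevel_from_hlevel(hlevel):
    # ml2hl formula: standard surface pressure 101325 Pa and a
    # 7000 m scale height.
    return 101325.0 * math.exp(-hlevel / 7000.0)

print(plevel_from_hlevel(0.0))     # 101325.0
print(plevel_from_hlevel(7000.0))  # about 37275 Pa (one scale height)
```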
Parameter
plevels FLOAT Pressure levels in pascal
hlevels FLOAT Height levels in meter
Environment
EXTRAPOLATE If set to 1 extrapolate missing values.
Note
The components of the hybrid coordinate must always be available at the hybrid layer interfaces even
if the data is defined at the hybrid layer midpoints.
Example
To interpolate hybrid model level data to pressure levels of 925, 850, 500 and 200 hPa use:
cdo ml2pl,92500,85000,50000,20000 infile outfile
Synopsis
Description
Interpolate 3D variables on hybrid sigma height coordinates to pressure levels. The input file must
contain the 3D air pressure in pascal. The air pressure is identified by the NetCDF CF standard
name air_pressure. Use the alias ap2plx or the environment variable EXTRAPOLATE to extrapolate
missing values. This operator requires all variables on the same horizontal grid.
Parameter
plevels FLOAT Comma-separated list of pressure levels in pascal
Environment
EXTRAPOLATE If set to 1 extrapolate missing values.
Note
This is a specific implementation for NetCDF files from the ICON model; it may not work with data
from other sources.
Example
To interpolate 3D variables on hybrid sigma height level to pressure levels of 925, 850, 500 and 200
hPa use:
cdo ap2pl,92500,85000,50000,20000 infile outfile
Synopsis
Description
Interpolate 3D variables on hybrid sigma height coordinates to height levels. The input file must
contain the 3D geometric height in meter. The geometric height is identified by the NetCDF CF
standard name geometric_height_at_full_level_center. Use the alias gh2hlx or the environ-
ment variable EXTRAPOLATE to extrapolate missing values. This operator requires all variables on the
same horizontal grid.
Parameter
hlevels FLOAT Comma-separated list of height levels in meter
Environment
EXTRAPOLATE If set to 1 extrapolate missing values.
Note
This is a specific implementation for NetCDF files from the ICON model; it may not work with data
from other sources.
Example
To interpolate 3D variables on hybrid sigma height level to height levels of 20, 100, 500, 1000, 5000,
10000 and 20000 meter use:
cdo gh2hl,20,100,500,1000,5000,10000,20000 infile outfile
Synopsis
intlevel,parameter infile outfile
Description
This operator performs a linear vertical interpolation of 3D variables. The target levels can be specified
with the level parameter or read in via a Z-axis description file.
Parameter
level FLOAT Comma-separated list of target levels
file STRING Path to a file containing a description of the Z-axis
Example
To interpolate 3D variables on height levels to a new set of height levels use:
cdo intlevel,level=10,50,100,500,1000 infile outfile
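The linear level interpolation can be sketched for a single column as follows (pure Python, ascending source levels assumed; an illustration, not the actual CDO implementation):

```python
def interp_profile(src_levels, values, tgt_levels):
    # Linear interpolation of one vertical profile onto target levels;
    # targets outside the source range become NaN (no extrapolation).
    out = []
    for t in tgt_levels:
        for (l0, v0), (l1, v1) in zip(zip(src_levels, values),
                                      zip(src_levels[1:], values[1:])):
            if l0 <= t <= l1:
                w = (t - l0) / (l1 - l0)
                out.append(v0 + w * (v1 - v0))
                break
        else:
            out.append(float("nan"))
    return out

print(interp_profile([10, 100, 1000], [1.0, 2.0, 3.0], [10, 55, 1000]))
# [1.0, 1.5, 3.0]
```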
Synopsis
<operator>,tgtcoordinate infile1 infile2 outfile
Description
This operator performs a linear vertical interpolation of 3D variable fields with given 3D vertical
coordinates. infile1 contains the 3D data variables and infile2 the 3D vertical source coordinate.
The parameter tgtcoordinate is a datafile with the 3D vertical target coordinate.
Operators
intlevel3d Linear level interpolation onto a 3D vertical coordinate
Parameter
tgtcoordinate STRING filename for 3D vertical target coordinates
Example
To interpolate 3D variables from one set of 3D height levels into another one where
• infile2 contains a single 3D variable, which represents the source 3D vertical coordinate
• infile1 contains the source data, which the vertical coordinate from infile2 belongs to
• tgtcoordinate only contains the target 3D height levels
cdo intlevel3d,tgtcoordinate infile1 infile2 outfile
Synopsis
Description
This module performs linear interpolation between timesteps. Interpolation is only performed if both
values exist. If both values are missing values, the result is also a missing value. If only one value
exists, it is taken if the time weighting is greater than or equal to 0.5. So no new value will be created
at existing time steps, if the value is missing there.
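The interpolation and missing-value rules above can be sketched as follows (None stands in for the missing value; w is the time weight of the second timestep; an illustration, not the actual CDO code):

```python
MISS = None  # missing value marker

def interp_timestep(v0, v1, w):
    """Linear interpolation between two timesteps; a value that exists
    on only one side is taken if its time weight is >= 0.5."""
    if v0 is MISS and v1 is MISS:
        return MISS
    if v0 is MISS:
        return v1 if w >= 0.5 else MISS
    if v1 is MISS:
        return v0 if (1.0 - w) >= 0.5 else MISS
    return (1.0 - w) * v0 + w * v1

print(interp_timestep(10.0, 20.0, 0.25))  # 12.5
print(interp_timestep(MISS, 20.0, 0.75))  # 20.0
print(interp_timestep(MISS, 20.0, 0.25))  # None
```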
Operators
Parameter
date STRING Start date (format YYYY-MM-DD)
time STRING Start time (format hh:mm:ss)
inc STRING Optional increment (seconds, minutes, hours, days, months, years) [default:
0hour]
n INTEGER Number of timesteps from one timestep to the next
Example
Assumed a 6 hourly dataset starts at 1987-01-01 12:00:00. To interpolate this time series to a one
hourly dataset use:
cdo inttime,1987-01-01,12:00:00,1hour infile outfile
Synopsis
Description
This operator performs linear interpolation between two years, timestep by timestep. The input
files need to have the same structure with the same variables. The output files will be named
<obase><yyyy><suffix> where yyyy will be the year and suffix is the filename extension derived
from the file format.
Parameter
years INTEGER Comma-separated list or first/last[/inc] range of years
Environment
CDO_FILE_SUFFIX Set the default file suffix. This suffix will be added to the output file names
instead of the filename extension derived from the file format. Set this variable
to NULL to disable the adding of a file suffix.
Note
This operator needs to open all output files simultaneously. The maximum number of open files
depends on the operating system!
Example
Assume there are two monthly mean datasets over a year. The first dataset has 12 timesteps for the
year 1985 and the second one for the year 1990. To interpolate the years between 1985 and 1990
month by month use:
cdo intyear,1986,1987,1988,1989 infile1 infile2 year
2.13. Transformation
This section contains modules to perform spectral transformations.
Here is a short overview of all operators in this section:
Synopsis
Description
This module transforms fields on a global regular Gaussian grid to spectral coefficients and vice
versa. The transformation is achieved by applying Fast Fourier Transformation (FFT) first and
direct Legendre Transformation afterwards in gp2sp. In sp2gp the inverse Legendre Transformation
and inverse FFT are used. Missing values are not supported.
The relationship between the spectral resolution, governed by the truncation number T, and the grid
resolution depends on the number of grid points at which the shortest wavelength field is represented.
For a grid with 2N points between the poles (so 4N grid points in total around the globe) the
relationship is:
linear grid: the shortest wavelength is represented by 2 grid points → 4N ≃ 2(TL + 1)
quadratic grid: the shortest wavelength is represented by 3 grid points → 4N ≃ 3(TQ + 1)
cubic grid: the shortest wavelength is represented by 4 grid points → 4N ≃ 4(TC + 1)
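These relations can be evaluated with a small helper (a sketch; the rounding reflects the approximate nature of the relations, so the exact truncation used by a given model may differ by one):

```python
def truncation(nlat, kind):
    # nlat = 2N latitudes -> 4N longitudes; 4N ~= k*(T+1) with
    # k = 2 (linear), 3 (quadratic), 4 (cubic).
    k = {"linear": 2, "quadratic": 3, "cubic": 4}[kind]
    return round(4 * (nlat // 2) / k) - 1

# A Gaussian N32 grid has 64 latitudes:
print(truncation(64, "linear"))     # 63 (TL63)
print(truncation(64, "quadratic"))  # 42 (T42)
print(truncation(64, "cubic"))      # 31 (TC31)
```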
The quadratic grid is used by ECHAM and ERA15. ERA40 uses a linear Gaussian grid, reflected in
the TL notation.
The following table shows the calculation of the number of latitudes and the triangular truncation
for the different grid types:
Operators
Parameter
type STRING Type of the grid: quadratic, linear, cubic (default: type=quadratic)
trunc STRING Triangular truncation
Note
To speed up the calculations, the Legendre polynomials are kept in memory. This requires a relatively
large amount of memory. This is for example 12GB for T1279 data.
Example
To transform spectral coefficients from T106 to N80 Gaussian grid use:
cdo sp2gp infile outfile
Synopsis
Description
Changes the triangular truncation of all spectral fields. This operator performs downward conversion
by cutting the resolution. Upward conversions are achieved by filling in zeros.
Parameter
trunc INTEGER New spectral resolution
Synopsis
Description
Calculate spherical harmonic coefficients of velocity potential and stream function from spherical
harmonic coefficients of relative divergence and vorticity. The divergence and vorticity need to have
the names sd and svo or code numbers 155 and 138.
Synopsis
Description
This module converts relative divergence and vorticity to U and V wind and vice versa. Divergence
and vorticity are spherical harmonic coefficients in spectral space and U and V are on a global regular
Gaussian grid. The Gaussian latitudes need to be ordered from north to south. Missing values are
not supported.
The relationship between the spectral resolution, governed by the truncation number T, and the grid
resolution depends on the number of grid points at which the shortest wavelength field is represented.
For a grid with 2N points between the poles (so 4N grid points in total around the globe) the
relationship is:
linear grid: the shortest wavelength is represented by 2 grid points → 4N ≃ 2(TL + 1)
quadratic grid: the shortest wavelength is represented by 3 grid points → 4N ≃ 3(TQ + 1)
cubic grid: the shortest wavelength is represented by 4 grid points → 4N ≃ 4(TC + 1)
The quadratic grid is used by ECHAM and ERA15. ERA40 uses a linear Gaussian grid, reflected in
the TL notation.
The following table shows the calculation of the number of latitudes and the triangular truncation
for the different grid types:
Operators
Parameter
gridtype STRING Type of the grid: quadratic, linear (default: quadratic)
Note
To speed up the calculations, the Legendre polynomials are kept in memory. This requires a relatively
large amount of memory. This is for example 12GB for T1279 data.
Example
Assume a dataset has at least spherical harmonic coefficients of divergence and vorticity. To transform
the spectral divergence and vorticity to U and V wind on a Gaussian grid use:
cdo dv2uv infile outfile
Synopsis
Description
The fourier operator performs the fourier transformation or the inverse fourier transformation of
all input fields. If the number of timesteps is a power of 2 then the algorithm of the Fast Fourier
Transformation (FFT) is used.
It is

o(t, x) = 1/√n · Σ_{j=0}^{n−1} i(j, x) · e^{ε·2πi·jt/n}

where a user given epsilon = −1 leads to the forward transformation and a user given epsilon = 1
leads to the backward transformation.
If the input stream infile consists only of complex fields, then the fields of outfile, computed by
cdo -f ext fourier,1 -fourier,-1 infile outfile
are the same as those of infile. For real input files see function retocomplex.
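The transform can be sketched by evaluating the sum directly for one grid point (a slow O(n²) evaluation for illustration only, not the FFT used by CDO); forward (ε=−1) followed by backward (ε=1) transformation recovers the input, mirroring the cdo example above:

```python
import cmath

def fourier(values, eps):
    # Direct evaluation of:
    # o(t) = 1/sqrt(n) * sum_{j=0..n-1} i(j) * exp(eps * 2*pi*I * j*t/n)
    n = len(values)
    return [sum(values[j] * cmath.exp(eps * 2j * cmath.pi * j * t / n)
                for j in range(n)) / cmath.sqrt(n)
            for t in range(n)]

data = [complex(v) for v in (1.0, 2.0, 3.0, 4.0)]
roundtrip = fourier(fourier(data, -1), 1)  # forward, then backward
print([round(z.real, 10) for z in roundtrip])  # [1.0, 2.0, 3.0, 4.0]
```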
Parameter
epsilon INTEGER -1: forward transformation; 1: backward transformation
Note
Complex numbers can only be stored in NetCDF4 and EXTRA format.
2.14. Import/Export
This section contains modules to import and export data files which cannot be read or written directly
with CDO.
Here is a short overview of all operators in this section:
Synopsis
Description
This operator imports gridded binary data sets via a GrADS data descriptor file. The GrADS data
descriptor file contains a complete description of the binary data as well as instructions on where to
find the data and how to read it. The descriptor file is an ASCII file that can be created easily with
a text editor. The general contents of a gridded data descriptor file are as follows:
• Filename for the binary data
• Missing or undefined data value
• Mapping between grid coordinates and world coordinates
• Description of variables in the binary data set
A detailed description of the components of a GrADS data descriptor file can be found in [GrADS].
Here is a list of the supported components: BYTESWAPPED, CHSUB, DSET, ENDVARS, FILE-
HEADER, HEADERBYTES, OPTIONS, TDEF, TITLE, TRAILERBYTES, UNDEF, VARS, XDEF,
XYHEADER, YDEF, ZDEF
Note
Only 32-bit IEEE floats are supported for standard binary files!
Example
To convert a binary data file to NetCDF use:
cdo -f nc import_binary infile.ctl outfile.nc
The binary data file infile.bin contains one parameter on a global 1 degree lon/lat grid written with
FORTRAN record length headers (sequential).
Synopsis
Description
This operator imports gridded CM-SAF (Satellite Application Facility on Climate Monitoring) HDF5
files. CM-SAF exploits data from polar-orbiting and geostationary satellites in order to provide
climate monitoring products of the following parameters:
Cloud parameters: cloud fraction (CFC), cloud type (CTY), cloud phase (CPH), cloud top height,
pressure and temperature (CTH,CTP,CTT), cloud optical thickness (COT), cloud water
path (CWP).
Surface radiation components: Surface albedo (SAL); surface incoming (SIS) and net (SNS) shortwave
radiation; surface downward (SDL) and outgoing (SOL) longwave radiation, surface net
longwave radiation (SNL) and surface radiation budget (SRB).
Top-of-atmosphere radiation components: Incoming (TIS) and reflected (TRS) solar radiative flux
at top-of-atmosphere. Emitted thermal radiative flux at top-of-atmosphere (TET).
Water vapour: Vertically integrated water vapour (HTW), layered vertically integrated water vapour
and layer mean temperature and relative humidity for 5 layers (HLW), temperature and
mixing ratio at 6 pressure levels.
Daily and monthly mean products can be ordered via the CM-SAF web page (www.cmsaf.eu). Products
with higher spatial and temporal resolution, i.e. instantaneous swath-based products, are available
on request ([email protected]). All products are distributed free of charge. More information
on the data is available on the CM-SAF homepage (www.cmsaf.eu).
Daily and monthly mean products are provided in equal-area projections. CDO reads the projection
parameters from the metadata in the HDF5 headers in order to allow spatial operations like remapping.
For spatial operations with instantaneous products on original satellite projection, additional
files with arrays of latitudes and longitudes are needed. These can be obtained from CM-SAF together
with the data.
Note
To use this operator, it is necessary to build CDO with HDF5 support (version 1.6 or higher). The
PROJ library (version 5.0 or higher) is needed for full support of the remapping functionality.
Example
A typical sequence of commands with this operator could look like this:
cdo -f nc remapbil,r360x180 -import_cmsaf cmsaf_product.hdf output.nc
(bilinear remapping to a predefined global grid with 1 deg resolution and conversion to NetCDF).
If you work with CM-SAF data on the original satellite projection, an additional file with information
on geolocation is required to perform such spatial operations:
cdo -f nc remapbil,r720x360 -setgrid,cmsaf_latlon.h5 -import_cmsaf cmsaf.hdf out.nc
Some CM-SAF data are stored as scaled integer values. For some operations, it could be desirable
(or necessary) to increase the accuracy of the converted products:
Synopsis
Description
This operator imports gridded binary AMSR (Advanced Microwave Scanning Radiometer) data. The
binary data files are available from the AMSR ftp site (ftp://ftp.ssmi.com/amsre). Each file consists
of twelve (daily) or five (averaged) 0.25 x 0.25 degree grid (1440,720) byte maps. For daily files,
six daytime maps in the following order, Time (UTC), Sea Surface Temperature (SST), 10 meter
Surface Wind Speed (WSPD), Atmospheric Water Vapor (VAPOR), Cloud Liquid Water (CLOUD),
and Rain Rate (RAIN), are followed by six nighttime maps in the same order. Time-Averaged files
contain just the geophysical layers in the same order [SST, WSPD, VAPOR, CLOUD, RAIN]. More
information about the data is available on the AMSR homepage http://www.remss.com/amsr.
Example
To convert monthly binary AMSR files to NetCDF use:
cdo -f nc import_amsr amsre_yyyymmv5 amsre_yyyymmv5.nc
Synopsis
input,grid[,zaxis] outfile
inputsrv outfile
inputext outfile
Description
This module reads time series of one 2D variable from standard input. All input fields need to have
the same horizontal grid. The format of the input depends on the chosen operator.
Operators
Parameter
grid STRING Grid description file or name
zaxis STRING Z-axis description file
Example
Assume an ASCII dataset contains a field on a global regular grid with 32 longitudes and 16 latitudes
(512 elements). To create a GRIB1 dataset from the ASCII dataset use:
cdo -f grb input,r32x16 outfile.grb < my_ascii_data
Synopsis
output infiles
outputf ,format[,nelem] infiles
outputint infiles
outputsrv infiles
outputext infiles
Description
This module prints all values of all input datasets to standard output. All input fields need to have
the same horizontal grid. All input files need to have the same structure with the same variables.
The format of the output depends on the chosen operator.
Operators
output ASCII output
Prints all values to standard output. Each row has 6 elements with the C-style format
"%13.6g".
outputf Formatted output
Prints all values to standard output. The format and number of elements for each row
have to be specified by the parameters format and nelem. The default for nelem is 1.
outputint Integer output
Prints all values rounded to the nearest integer to standard output.
outputsrv SERVICE ASCII output
Prints all values to standard output. Each field with a header of 8 integers (SERVICE-like).
outputext EXTRA ASCII output
Prints all values to standard output. Each field with a header of 4 integers (EXTRA-like).
Parameter
format STRING C-style format for one element (e.g. %13.6g)
nelem INTEGER Number of elements for each row (default: nelem = 1)
Example
To print all field elements of a dataset formatted with "%8.4g" and 8 values per line use:
cdo outputf,%8.4g,8 infile
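The row layout of outputf can be mimicked in a few lines of Python, since Python's %-formatting accepts the same C-style format strings (an illustration of the layout, not CDO code):

```python
def output_rows(values, fmt="%13.6g", nelem=6):
    # One C-style format per element, nelem elements per row,
    # matching the defaults of the output operator.
    return ["".join(fmt % v for v in values[i:i + nelem])
            for i in range(0, len(values), nelem)]

for line in output_rows([1.0, 2.0, 3.0], "%8.4g", 2):
    print(line)
```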
Synopsis
outputtab,parameter infiles outfile
Description
This operator prints a table of all input datasets to standard output. infiles is an arbitrary number
of input files. All input files need to have the same structure with the same variables on different
timesteps. All input fields need to have the same horizontal grid.
The contents of the table depend on the chosen parameters. The format of each table parameter is
keyname[:len]. len is the optional length of a table entry. The number of significant digits of floating
point parameters can be set with the CDO option --precision, the default is 7. Here is a list of all
valid keynames:
Parameter
parameter STRING Comma-separated list of keynames, one for each column of the table
Example
To print a table with name, date, lon, lat and value information use:
cdo outputtab,name,date,lon,lat,value infile
Here is an example output of a time series with the yearly mean temperature at lon=10/lat=53.5:
# name date lon lat value
tsurf 1991−12−31 10 53.5 8.83903
tsurf 1992−12−31 10 53.5 8.17439
tsurf 1993−12−31 10 53.5 7.90489
tsurf 1994−12−31 10 53.5 10.0216
tsurf 1995−12−31 10 53.5 9.07798
Synopsis
Description
This module prints the first field of the input dataset to standard output. The output can be used to
generate 2D Lon/Lat plots with [GMT]. The format of the output depends on the chosen operator.
Operators
Example
1) GMT shaded contour plot of a global temperature field with a resolution of 4 degree. The contour
interval is 3 with a rainbow color table.
2) GMT shaded gridfill plot of a global temperature field with a resolution of 4 degree. The contour
interval is 3 with a rainbow color table.
2.15. Miscellaneous
This section contains miscellaneous modules which do not fit to the other sections before.
Here is a short overview of all operators in this section:
Synopsis
gradsdes[,mapversion] infile
Description
Creates a [GrADS] data descriptor file. Supported file formats are GRIB1, NetCDF, SERVICE,
EXTRA and IEG. For GRIB1 files the GrADS map file is also generated. For SERVICE and EXTRA
files the grid has to be specified with the CDO option ’-g <grid>’. This module takes infile in
order to create filenames for the descriptor (infile.ctl) and the map (infile.gmp) file.
Parameter
mapversion INTEGER Format version of the GrADS map file for GRIB1 datasets. Use 1 for
a machine specific version 1 GrADS map file, 2 for a machine independent version 2
GrADS map file and 4 to support GRIB files >2GB. A version 2 map file can be used
only with GrADS version 1.8 or newer. A version 4 map file can be used only with
GrADS version 2.0 or newer. The default is 4 for files >2GB, otherwise 2.
Example
To create a GrADS data descriptor file from a GRIB1 dataset use:
cdo gradsdes infile.grb
This will create a descriptor file with the name infile.ctl and the map file infile.gmp.
Assuming the input GRIB1 dataset has 3 variables over 12 timesteps on a Gaussian N16 grid, the
contents of the resulting GrADS data description file are approximately:
DSET ^infile.grb
DTYPE GRIB
INDEX ^infile.gmp
XDEF 64 LINEAR 0.000000 5.625000
YDEF 32 LEVELS -85.761 -80.269 -74.745 -69.213 -63.679 -58.143
               -52.607 -47.070 -41.532 -35.995 -30.458 -24.920
               -19.382 -13.844  -8.307  -2.769   2.769   8.307
                13.844  19.382  24.920  30.458  35.995  41.532
                47.070  52.607  58.143  63.679  69.213  74.745
                80.269  85.761
ZDEF 4 LEVELS 925 850 500 200
TDEF 12 LINEAR 12:00Z1jan1987 1mo
TITLE infile.grb T21 grid
OPTIONS yrev
UNDEF -9e+33
VARS 3
geosp  0  129,1,0   surface geopotential (orography)  [m^2/s^2]
t      4  130,99,0  temperature  [K]
tslm1  0  139,1,0   surface temperature of land  [K]
ENDVARS
Synopsis
Description
The "afterburner" is the standard post processor for [ECHAM] GRIB and NetCDF data which provides
the following operations:
This operator reads selection parameters as namelist from stdin. Use the UNIX redirection "<namelistfile"
to read the namelist from file.
The input files can’t be combined with other CDO operators because of an optimized reader for this
operator.
Namelist
Namelist parameters and their defaults:
TYPE controls the transformation and vertical interpolation. Transforming spectral data to Gaussian
grid representation and vertical interpolation to pressure levels are performed in a chain of steps. The
TYPE parameter may be used to stop the chain at a certain step. Valid values are:
Vorticity, divergence, streamfunction and velocity potential need special treatment in the vertical
transformation. They are not available as types 30, 40 and 41. If you select one of these combinations,
type is automatically switched to the equivalent types 70, 60 and 61. The type of all other variables
will be switched too, because the type is a global parameter.
CODE selects the variables by the ECHAM GRIB1 code number (1-255). The default value -1 processes
all detected codes. Derived variables computed by the afterburner:
LEVEL selects the hybrid or pressure levels. The allowed values depend on the parameter TYPE. The
default value -1 processes all detected levels.
INTERVAL selects the processing interval. The default value 0 processes data on monthly intervals.
INTERVAL=1 sets the interval to daily.
MEAN=1 computes and writes monthly or daily mean fields. The default value 0 writes out all timesteps.
EXTRAPOLATE=0 switches off the extrapolation of missing values during the interpolation from model
to pressure levels (only available with MEAN=0 and TYPE=30). The default value 1 extrapolates
missing values.
Possible combinations of TYPE, CODE and MEAN:
Parameter
vct STRING File with VCT in ASCII format
Example
To interpolate ECHAM hybrid model level data to pressure levels of 925, 850, 500 and 200 hPa, use:
cdo after infile outfile << EON
TYPE=30 LEVEL=92500,85000,50000,20000
EON
Synopsis
Description
This module takes the time series for each gridpoint in infile and (fast fourier) transforms it into the
frequency domain. According to the particular operator and its parameters certain frequencies are
filtered (set to zero) in the frequency domain and the spectrum is (inverse fast fourier) transformed
back into the time domain. To determine the frequency the time-axis of infile is used. (Data should
have a constant time increment since this assumption applies for transformation. However, the time
increment has to be different from zero.) All frequencies given as parameter are interpreted per year.
This is done under the assumption of a 365-day calendar. Consequently, if you want to perform
multiyear filtering accurately, you have to delete the 29th of February. If your infile has a 360-day
calendar, the frequency parameters fmin and fmax should be multiplied by a factor of 360/365 in
order to obtain accurate results. For the setup of a frequency filter the frequency parameters have to
be adjusted to a frequency in the data. Here fmin is rounded down and fmax is always rounded up.
Consequently it is possible to use bandpass with fmin=fmax without getting a zero-field for outfile.
Hints for efficient usage:
• to get reliable results the time-series has to be detrended (cdo detrend)
• the lowest frequency greater than zero that can be contained in infile is 1/(N*dT),
• the greatest frequency is 1/(2dT) (Nyquist frequency),
with N the number of timesteps and dT the time increment of infile in years.
Missing value support for operators in this module is not implemented, yet!
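The frequency bounds from the hints above can be computed as a quick sanity check before choosing fmin and fmax (N and dT as defined in the text):

```python
def frequency_bounds(nts, dt_years):
    # lowest resolvable frequency > 0: 1/(N*dT);
    # highest: the Nyquist frequency 1/(2*dT); both per year here.
    return 1.0 / (nts * dt_years), 1.0 / (2.0 * dt_years)

# 5 years of daily data on a 365-day calendar:
lo, hi = frequency_bounds(5 * 365, 1.0 / 365.0)
print(round(lo, 6), round(hi, 6))  # roughly 0.2 and 182.5 per year
```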
Operators
Parameter
fmin FLOAT Minimum frequency per year that passes the filter.
fmax FLOAT Maximum frequency per year that passes the filter.
Note
For better performance of these operators use the CDO configure option --with-fftw3.
Example
Now assume your data are still hourly for a time period of 5 years but with a 365/366-day calendar
and you want to suppress the variability on timescales greater or equal to one year (we suggest here
to use a number x bigger than one (e.g. x=1.5) since there will be dominant frequencies around the
peak (if there is one) as well, due to the fact that the time series is not of infinite length). Therefore
you can use the following:
cdo highpass,x -del29feb infile outfile
Accordingly you might use the following to suppress variability on timescales shorter than one year:
cdo lowpass,1 -del29feb infile outfile
Finally you might be interested in 2-year variability. If you want to suppress the seasonal cycle as
well as say the longer cycles in climate system you might use
cdo bandpass,x,y -del29feb infile outfile
Synopsis
Description
This module reads the grid cell area of the first grid from the input stream. If the grid cell area
is missing it will be computed from the grid coordinates. The area of a grid cell is calculated using
spherical triangles from the coordinates of the center and the vertices. The base is a unit sphere which
is scaled with the radius of the earth. The default earth radius is 6371000 meter. This value can be
changed with the environment variable PLANET_RADIUS. Depending on the chosen operator the
grid cell area or weights are written to the output stream.
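The spherical-triangle approach described above can be sketched with the Van Oosterom and Strackee formula for the spherical excess; this is a generic illustration, not CDO's actual implementation:

```python
import math

# Area of a triangle on the unit sphere from its three corner unit vectors,
# scaled by the planet radius:
#   tan(E/2) = |a . (b x c)| / (1 + a.b + b.c + c.a),   area = E * R^2

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def spherical_triangle_area(a, b, c, radius=6371000.0):
    num = abs(dot(a, cross(b, c)))
    den = 1.0 + dot(a, b) + dot(b, c) + dot(c, a)
    excess = 2.0 * math.atan2(num, den)   # spherical excess E
    return excess * radius * radius

# One octant of the sphere: 1/8 of the total surface 4*pi*R^2, i.e. pi/2 for R=1.
octant = spherical_triangle_area((1, 0, 0), (0, 1, 0), (0, 0, 1), radius=1.0)
```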
Operators
Environment
PLANET_RADIUS This variable is used to scale the computed grid cell areas to square meters. By
default PLANET_RADIUS is set to an earth radius of 6371000 meter.
Synopsis
Description
Smooth all grid points of a horizontal grid. The optional parameters are given as a comma-separated
list of "key=value" pairs.
Operators
Parameter
nsmooth INTEGER Number of times to smooth, default nsmooth=1
radius STRING Search radius, default radius=1deg (units: deg, rad, km, m)
maxpoints INTEGER Maximum number of points, default maxpoints=<gridsize>
form STRING Form of the curve, default form=linear
weight0 FLOAT Weight at distance 0, default weight0=0.25
weightR FLOAT Weight at the search radius, default weightR=0.25
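A plausible reading of form=linear is a weight that falls from weight0 at distance 0 to weightR at the search radius, with no contribution beyond the radius. This interpretation is an assumption, sketched here:

```python
# Assumed linear distance-weight function for the smooth operator's
# weight0/weightR/radius parameters; points beyond the radius get weight 0.

def smooth_weight(distance, radius, weight0=0.25, weightR=0.25, form="linear"):
    if distance > radius:
        return 0.0
    if form == "linear":
        return weight0 + (weightR - weight0) * distance / radius
    raise ValueError("only 'linear' is sketched here")

w_center = smooth_weight(0.0, radius=1.0, weight0=0.5, weightR=0.1)
w_edge   = smooth_weight(1.0, radius=1.0, weight0=0.5, weightR=0.1)
```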
Synopsis
Description
This operator computes the difference between consecutive timesteps.
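The timestep differencing can be sketched as follows. Whether the output keeps or drops a boundary timestep is not specified here, so this sketch simply yields one field less than the input:

```python
# Difference between consecutive timesteps: out[t] = in[t+1] - in[t].
# Each "field" is reduced to a single number for brevity.

def deltat(series):
    return [b - a for a, b in zip(series, series[1:])]

diffs = deltat([10.0, 12.5, 12.0, 15.0])
```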
Synopsis
Description
This module replaces old variable values with new values, depending on the operator.
Operators
Parameter
oldval,newval,... FLOAT Pairs of old and new values
rmin FLOAT Lower bound
rmax FLOAT Upper bound
c FLOAT New value - inside range
c2 FLOAT New value - outside range
Synopsis
gridcellindex[,parameter] infile
Description
Get the grid cell index of the grid point selected by the parameters lon and lat.
Parameter
lon INTEGER Longitude of the grid cell in degrees
lat INTEGER Latitude of the grid cell in degrees
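The cell lookup can be sketched as a nearest-center search. This uses a flat lon/lat distance for brevity; CDO presumably uses a proper spherical search, and the 1-based indexing is an assumption:

```python
import math

# Find the (assumed 1-based) index of the grid cell whose center is closest
# to the requested lon/lat, given the cell-center coordinate arrays.

def nearest_cell_index(lons, lats, lon, lat):
    best, best_d = 0, float("inf")
    for i, (clon, clat) in enumerate(zip(lons, lats)):
        d = math.hypot(clon - lon, clat - lat)   # flat-plane distance (sketch)
        if d < best_d:
            best, best_d = i + 1, d
    return best

idx = nearest_cell_index([0.0, 10.0, 20.0], [0.0, 0.0, 0.0], lon=9.0, lat=1.0)
```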
Synopsis
const,const,grid outfile
random,grid[,seed] outfile
topo[,grid] outfile
seq,start,end[,inc] outfile
stdatm,levels outfile
Description
Generates a dataset with one or more fields
Operators
T(z) = T0 + ∆T exp(-z/H)
with the following constants
This is the solution for the hydrostatic equations and is only valid for the troposphere
(constant positive lapse rate). The temperature increase in the stratosphere and other
effects of the upper atmosphere are not taken into account.
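The temperature profile above can be evaluated directly. The constants below are illustrative placeholders; the actual values are given in the constants table of the CDO documentation:

```python
import math

# stdatm temperature profile: T(z) = T0 + dT * exp(-z/H).
# All three constants are assumed illustrative values, not CDO's.

T0 = 213.0     # limit temperature for z -> infinity [K] (assumed)
dT = 75.0      # surface/stratosphere temperature difference [K] (assumed)
H = 10000.0    # scale height [m] (assumed)

def stdatm_temperature(z):
    return T0 + dT * math.exp(-z / H)

surface = stdatm_temperature(0.0)   # T0 + dT at the surface
```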
Parameter
const FLOAT Constant
seed INTEGER The seed for a new sequence of pseudo-random numbers [default: 1]
grid STRING Target grid description file or name
start FLOAT Start value of the loop
end FLOAT End value of the loop
inc FLOAT Increment of the loop [default: 1]
levels FLOAT Target levels in metre above surface
Example
To create a standard atmosphere dataset on a given horizontal grid:
cdo enlarge,gridfile -stdatm,10000,8000,5000,3000,2000,1000,500,200,0 outfile
Synopsis
Description
Sorts the elements in ascending order over all timesteps for every field position. After sorting it is:
Example
To sort all field elements of a dataset over all timesteps use:
cdo timsort infile outfile
Synopsis
Description
This module contains special operators for datasets with wind components on a rotated lon/lat grid,
e.g. data from the regional models HIRLAM or REMO.
Operators
Parameter
u,v STRING Pair of u,v wind components (use variable names or code numbers)
-/+0.5,-/+0.5 STRING Destaggered grid offsets are optional (default -0.5,-0.5)
Example
Typical operator sequence on HIRLAM NWP model output (LAMH_D11 files):
cdo uvDestag,33,34 inputfile inputfile_destag
cdo rotuvNorth,33,34 inputfile_destag inputfile_rotuvN
Synopsis
Description
This is a special operator for datasets with wind components on a rotated grid, e.g. data from the
regional model REMO. It performs a backward transformation of the velocity components U and V
from a rotated spherical system to a geographical system.
Parameter
u,v,... STRING Pairs of zonal and meridional velocity components (use variable names or
code numbers)
Note
This is a specific implementation for data from the REMO model, it may not work with data from
other sources.
Example
To transform the u and v velocity of a dataset from a rotated spherical system to a geographical
system use:
cdo rotuvb,u,v infile outfile
Synopsis
Description
MPIOM data are on a rotated Arakawa C grid. The velocity components U and V are located
on the edges of the cells and point in the direction of the grid lines and rows. With mrotuvb the
velocity vector is rotated into the latitudinal and longitudinal directions. Before the rotation, U and
V are interpolated to the scalar points (cell centers). infile1 has to contain U together with its
coordinates, and infile2 has to contain V. mrotuvb assumes a positive meridional flow for a flow from
grid point (i,j) to grid point (i,j+1) and a positive zonal flow for a flow from grid point (i+1,j) to
grid point (i,j).
Note
This is a specific implementation for data from the MPIOM model, it may not work with data from
other sources.
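At each grid point, transforming wind components between a rotated and a geographical system amounts to a plane rotation by the local angle between the two systems' north directions. The generic rotation is sketched below; the model-specific angle computation used by rotuvb/mrotuvb is not reproduced here:

```python
import math

# Rotate wind components (u, v) by a given local rotation angle.
# This is the generic vector rotation only, not the REMO/MPIOM-specific logic.

def rotate_uv(u, v, angle_rad):
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    u_geo = u * cos_a - v * sin_a
    v_geo = u * sin_a + v * cos_a
    return u_geo, v_geo

# A quarter turn maps a purely zonal wind onto a purely meridional one.
u_geo, v_geo = rotate_uv(10.0, 0.0, math.pi / 2)
```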
Synopsis
Description
This is a special operator for the post-processing of the atmospheric general circulation model
[ECHAM]. It computes the mass stream function (code=272). The input dataset has to be a
zonal mean of the v-velocity [m/s] (code=132) on pressure levels.
Example
To compute the mass stream function from a zonal mean v-velocity dataset use:
cdo mastrfu infile outfile
Synopsis
Description
This module contains operators that calculate derived model parameters. These are currently the
parameters sea level pressure and geopotential height. All necessary input parameters are identified
by their GRIB1 code number or the NetCDF CF standard name. Supported GRIB1 parameter tables
are: WMO standard table number 2 and ECMWF local table number 128.
Operators
Synopsis
Description
Operators
Parameter
pressure FLOAT Pressure in bar (constant value assigned to all levels)
Synopsis
Description
This is a special operator for the post-processing of the ocean and sea ice model [MPIOM]. It calculates
the sea water potential density (name=rhopoto; code=18). Required input fields are sea water in-situ
temperature (name=to; code=20) and sea water salinity (name=sao; code=5). Pressure is calculated
from the level information or can be specified by the optional parameter.
Parameter
pressure FLOAT Pressure in bar (constant value assigned to all levels)
Example
To compute the sea water potential density from the potential temperature use this operator in
combination with adisit:
cdo rhopot -adisit infile outfile
Synopsis
Description
This module creates bins for a histogram of the input data. The bins have to be adjacent and must
have non-overlapping intervals. The user has to define the bounds of the bins: the first value is the
lower bound and the second value the upper bound of the first bin, the bounds of the second bin are
defined by the second and third value, and so on. Only 2-dimensional input fields are allowed. The
output file contains one vertical level for each of the requested bins.
Operators
Parameter
bounds FLOAT Comma-separated list of the bin bounds (-inf and inf valid)
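The binning rule above can be sketched as a per-value count over adjacent bins. Whether CDO treats the intervals as half-open is an assumption here:

```python
# Count values falling into adjacent bins given as a list of bounds:
# bin i covers [bounds[i], bounds[i+1]) (half-open intervals assumed).

def histcount(values, bounds):
    counts = [0] * (len(bounds) - 1)
    for v in values:
        for i in range(len(counts)):
            if bounds[i] <= v < bounds[i + 1]:
                counts[i] += 1
                break
    return counts

counts = histcount([0.5, 1.5, 1.7, 3.2], bounds=[0, 1, 2, 4])
```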
Synopsis
Description
This operator sets the boundaries of rectangularly understood fields in the east, west, south, and
north. Positive values of the parameters enlarge the field at the selected boundary; negative values
shrink it there. The new rows and columns are filled with the missing value. With the optional
parameter value, a different fill value can be used. Global cyclic fields are filled cyclically at the east
and west borders if the fill value is not set by the user.
Parameter
east INTEGER East halo
west INTEGER West halo
south INTEGER South halo
north INTEGER North halo
value FLOAT Fill value (default is the missing value)
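The east/west halo logic can be sketched for a single 2D field. The fill value and the cyclic wrap-around follow the description above; negative (shrinking) halos are left out of this sketch:

```python
# Add 'west' columns on the left and 'east' columns on the right of each row,
# filled with a fill value, or taken cyclically from the opposite side.
# Sketch only: assumes positive halo counts.

MISS = -9e33   # assumed illustrative missing value

def sethalo_ew(field, east, west, value=MISS, cyclic=False):
    out = []
    for row in field:
        left = row[-west:] if cyclic else [value] * west
        right = row[:east] if cyclic else [value] * east
        out.append(list(left) + list(row) + list(right))
    return out

padded = sethalo_ew([[1, 2, 3]], east=1, west=1, cyclic=True)
```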
Synopsis
Description
Let infile1 and infile2 be time series of temperature and wind speed records; then a corresponding
time series of resulting wind chill temperatures is written to outfile. The wind chill temperature
calculation is only valid for a temperature of T <= 33 ℃ and a wind speed of v >= 1.39 m/s. Whenever
these conditions are not satisfied, a missing value is written to outfile. Note that the temperature
and wind speed records have to be given in units of ℃ and m/s, respectively.
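The validity bounds above match the widely used JAG/TI wind chill formula (temperature in ℃, wind speed converted to km/h). Whether CDO's wct uses exactly this formula is an assumption; the sketch returns None as the missing value outside the valid range:

```python
# Wind chill temperature, JAG/TI formula (assumed; may differ from CDO's wct):
#   WCT = 13.12 + 0.6215*T - 11.37*v^0.16 + 0.3965*T*v^0.16,  v in km/h.

def wind_chill(t_celsius, v_ms):
    if t_celsius > 33.0 or v_ms < 1.39:
        return None                    # outside the valid range: missing value
    v = v_ms * 3.6                     # m/s -> km/h
    vp = v ** 0.16
    return 13.12 + 0.6215 * t_celsius - 11.37 * vp + 0.3965 * t_celsius * vp

wct = wind_chill(-10.0, 20.0 / 3.6)    # -10 degC at 20 km/h
```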
2.15.21. FDNS - Frost days where no snow index per time period
Synopsis
Description
Let infile1 be a time series of the daily minimum temperature TN and infile2 be a corresponding
series of daily surface snow amounts. Then the number of days where TN < 0 ℃ and the surface snow
amount is less than 1 cm is counted. The temperature TN has to be given in units of Kelvin. The
date information of a timestep in outfile is the date of the last contributing timestep in infile.
Synopsis
Description
Let infile be a time series of the daily maximum horizontal wind speed VX, then the number of
days where VX > v is counted. The horizontal wind speed v is an optional parameter with default v
= 10.5 m/s. A further output variable is the maximum number of consecutive days with maximum
wind speed greater than or equal to v. Note that both VX and v have to be given in units of m/s.
Also note that the horizontal wind speed is defined as the square root of the sum of squares of the
zonal and meridional wind speeds. The date information of a timestep in outfile is the date of the
last contributing timestep in infile.
Parameter
v FLOAT Horizontal wind speed threshold (m/s, default v = 10.5 m/s)
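Both output variables described above (the day count and the longest consecutive run) can be sketched in one pass. The text mixes "greater than" and "greater than or equal to"; this sketch assumes >= throughout:

```python
# Count days with daily maximum wind speed VX >= v, and the longest run of
# consecutive such days (threshold comparison >= is an assumption).

def wind_days(vx_daily, v=10.5):
    count, run, longest = 0, 0, 0
    for vx in vx_daily:
        if vx >= v:
            count += 1
            run += 1
            longest = max(longest, run)
        else:
            run = 0
    return count, longest

count, longest = wind_days([8.0, 11.0, 12.0, 9.0, 10.5, 13.0, 14.0])
```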
Synopsis
Description
Let infile be a time series of the daily maximum horizontal wind speed VX, then the number of
days where VX is greater than or equal to 10.5 m/s is counted. A further output variable is the
maximum number of consecutive days with maximum wind speed greater than or equal to 10.5 m/s.
Note that VX is defined as the square root of the sum of squares of the zonal and meridional wind
speeds and has to be given in units of m/s. The date information of a timestep in outfile is the
date of the last contributing timestep in infile.
Synopsis
Description
Let infile be a time series of the daily maximum horizontal wind speed VX, then the number of
days where VX is greater than or equal to 20.5 m/s is counted. A further output variable is the
maximum number of consecutive days with maximum wind speed greater than or equal to 20.5 m/s.
Note that VX is defined as the square root of the sum of squares of the zonal and meridional wind
speeds and has to be given in units of m/s. The date information of a timestep in outfile is the
date of the last contributing timestep in infile.
Synopsis
Description
Let infile be a time series of the daily maximum horizontal wind speed VX, then the number of
days where VX is greater than or equal to 32.5 m/s is counted. A further output variable is the
maximum number of consecutive days with maximum wind speed greater than or equal to 32.5 m/s.
Note that VX is defined as the square root of the sum of squares of the zonal and meridional wind
speeds and has to be given in units of m/s. The date information of a timestep in outfile is the
date of the last contributing timestep in infile.
Synopsis
Description
The [CMOR] (Climate Model Output Rewriter) library comprises a set of functions that can be
used to produce CF-compliant NetCDF files fulfilling the requirements of many of the climate
community's standard model experiments. These experiments are collectively referred to as MIPs.
Much of the metadata written to the output files is defined in MIP-specific tables, typically made
available from each MIP's web site.
The CDO operator cmorlite processes the header and variable sections of such MIP tables and writes
the result with the internal I/O library [CDI]. In addition to the CMOR 2 and 3 table formats, the
CDO parameter table format is also supported. The following parameter table entries are available:
Most of the above entries are stored as variable attributes; some of them are handled differently. The
variable name is used as a search key for the parameter table. valid_min, valid_max, ok_min_mean_abs
and ok_max_mean_abs are used to check the range of the data.
Parameter
table STRING Name of the CMOR table as specified from PCMDI
convert STRING Converts the units if necessary
Example
Here is an example of a parameter table for one variable:
prompt> cat mypartab
&parameter
name = t
out_name = ta
standard_name = air_temperature
units = "K"
missing_value = 1.0e+20
valid_min = 157.1
valid_max = 336.3
/
This command renames the variable t to ta. The standard name of this variable is set to air_temperature
and the unit is set to [K] (converting the unit if necessary). The missing value will be set to 1.0e+20.
In addition, it will be checked whether the values of the variable are in the range 157.1 to 336.3.
The result is stored in NetCDF.
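The parameter table format shown above is a simple namelist-like text format: each entry starts with "&parameter", lists "key = value" lines, and ends with "/". A minimal parser sketch (not CDO's own reader):

```python
# Parse CDO-style parameter table text into a list of entry dictionaries.

def parse_partab(text):
    entries, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "&parameter":
            current = {}
        elif line == "/" and current is not None:
            entries.append(current)
            current = None
        elif current is not None and "=" in line:
            key, _, val = line.partition("=")
            current[key.strip()] = val.strip().strip('"')
    return entries

table = parse_partab("""
&parameter
name = t
out_name = ta
units = "K"
/
""")
```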
Synopsis
verifygrid infile
Description
This operator verifies the coordinates of all horizontal grids found in infile. Among other things,
it searches for duplicate cells, non-convex cells, and whether the center is located outside the cell
bounds. Use the CDO option -v to output the position of these cells. This information can be useful
to avoid problems when interpolating the data.
Synopsis
Description
Degrade or upgrade the resolution of a HEALPix grid.
Operators
Parameter
nside INTEGER The nside of the target HEALPix grid; must be a power of two [default: same as
input].
order STRING Pixel ordering of the target HEALPix grid ('nested' or 'ring').
power FLOAT If non-zero, divide the result by (nside[in]/nside[out])**power. power=-2 keeps
the sum of the map invariant.
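The power parameter can be illustrated for degrading a NESTED-ordered map: degrading averages each group of 4 child pixels, and dividing by (nside[in]/nside[out])**power with power=-2 turns that mean into a per-pixel sum, keeping the map total invariant. A flat-list sketch, not the HEALPix library itself:

```python
# Degrade a NESTED map by averaging groups of factor**2 pixels, then apply
# the power scaling: divide by (nside_in/nside_out)**power.

def degrade_nested(pixels, factor, power=0.0):
    group = factor * factor
    coarse = [sum(pixels[i:i + group]) / group
              for i in range(0, len(pixels), group)]
    scale = float(factor) ** power      # (nside_in/nside_out)**power
    return [c / scale for c in coarse]

fine = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
coarse = degrade_nested(fine, factor=2, power=-2.0)   # sum-preserving degrade
```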
3. Contributors
3.1. History
CDO was originally developed by Uwe Schulzweida at the Max Planck Institute for Meteorology (MPI-M).
The first public release was made in 2003. The MPI-M, together with the DKRZ, has a long history
in the development of tools for processing climate data. CDO was inspired by some of these tools, such as
the PINGO package and the GRIB-Modules.
3.2. External sources
PINGO1 was developed by Jürgen Waszkewitz, Peter Lenzen, and Nathan Gillet in 1995 at the DKRZ,
Hamburg (Germany). CDO has a similar user interface and uses some of the PINGO routines.
The GRIB-Modules were developed by Heiko Borgert and Wolfgang Welke in 1991 at the MPI-M. CDO
uses a similar module structure and also some of the routines.
afterburner is a postprocessing application for ECHAM data and ECMWF analysis data, originally de-
veloped by Edilbert Kirk, Michael Ponater and Arno Hellbach. The afterburner code was modified
for the CDO operators after, ml2pl, ml2hl, sp2gp, gp2sp.
SCRIP is a software package used to generate interpolation weights for remapping fields from one grid to
another in spherical geometry [SCRIP]. It was developed at the Los Alamos National Laboratory by
Philip W. Jones. The SCRIP library was converted from Fortran to ANSI C and is used as the base
for the remapping operators in CDO.
YAC (Yet Another Coupler) was jointly developed at the DKRZ and MPI-M by Moritz Hanke and Rene
Redler [YAC]. CDO uses its clipping and cell search routines for the conservative remapping
with remapcon.
libkdtree is a C99 implementation of the kd-tree algorithm, developed by Jörg Dietrich.
CDO uses tools from the GNU project, including automake and libtool.
3.3. Contributors
The primary contributors to the CDO development have been:
Uwe Schulzweida : Concept, design and implementation of CDO, project coordination, and releases.
Luis Kornblueh : He has supported CDO from the beginning. His main contributions are GRIB performance
and compression, and GME and unstructured grid support. Luis also helps with design and planning.
Ralf Müller : He has been working on CDO since 2009. His main contributions are the implementation of
the User Portal, the Ruby and Python interfaces for all CDO operators, the building process, and the
Windows support. The CDO User Portal was funded by the European Commission infrastructure
project IS-ENES. Ralf also helps a lot with the user support. Implemented operators: intlevel3d,
consecsum, consects, ngrids, ngridpoints, reducegrid
1 Procedural INterface for GRIB formatted Objects
Cedrick Ansorge : He worked on the software package CDO as a student assistant at MPI-M from 2007
to 2011. Implemented operators: eof, eof3d, enscrps, ensbrs, maskregion, bandpass, lowpass, highpass,
smooth9
Oliver Heidmann : He worked on the software package CDO as a student assistant at MPI-M from 2015
to 2018.
Karin Meier-Fleischer : She has been working in CDO user support since 2017.
Fabian Wachsmann : He has been working on CDO for the CMIP6 project since 2016. His main task is the
implementation and support of the cmor operator. He has also implemented the ETCCDI indices of
daily temperature and precipitation extremes.
Ralf Quast : He worked on CDO on behalf of the Service Gruppe Anpassung (SGA), DKRZ, in 2006.
He implemented all ECA indices of daily temperature and precipitation extremes, all percentile
operators, the module YDRUNSTAT, and wct.
Kameswarrao Modali : He worked on CDO from 2012-2013.
Implemented operators: contour, shaded, grfill, vector, graph.
Michal Koutek : Implemented operators: selmulti, delmulti, changemulti, samplegrid, uvDestag,
rotuvNorth, projuvLatLon.
Etienne Tourigny : Implemented operators: setclonlatbox, setcindexbox, setvals, splitsel, histfreq, setrtoc,
setrtoc2.
Karl-Hermann Wieners : Implemented operators: aexpr, aexprf, selzaxisname.
Asela Rajapakse : He worked on CDO from 2016-2017 as part of the EUDAT project.
Implemented operator: verifygrid
Estanislao Gavilan : Improved the CDO documentation for the installation section.
Many users have contributed to CDO by sending bug reports, patches, and suggestions over time. The
active participation of some users in the user forum has also been very helpful. Here is an incomplete list:
Jaison-Thomas Ambadan, Harald Anlauf, Andy Aschwanden, Stefan Bauer, Simon Blessing,
Renate Brokopf, Michael Boettinger, Tim Brücher, Reinhard Budich, Martin Claus,
Traute Crüger, Brendan de Tracey, Irene Fischer-Bruns, Chris Fletscher, Helmut Frank,
Kristina Fröhlich, Oliver Fuhrer, Monika Esch, Pier Giuseppe Fogli, Beate Gayer,
Veronika Gayler, Marco Giorgetta, David Gobbett, Holger Goettel, Helmut Haak,
Stefan Hagemann, Angelika Heil, Barbara Hennemuth, Daniel Hernandez, Nathanael Huebbe,
Thomas Jahns, Frank Kaspar, Daniel Klocke, Edi Kirk, Yvonne Küstermann,
Stefanie Legutke, Leonidas Linardakis, Stephan Lorenz, Frank Lunkeit, Uwe Mikolajewicz,
Laura Niederdrenk, Dirk Notz, Hans-Jürgen Panitz, Ronny Petrik, Swantje Preuschmann,
Florian Prill, Asela Rajapakse, Daniel Reinert, Hannes Reuter, Mathis Rosenhauer,
Reiner Schnur, Martin Schultz, Dennis Shea, Kevin Sieck, Martin Stendel,
Bjorn Stevens, Martina Stockhaus, Claas Teichmann, Adrian Tompkins, Jörg Trentmann,
Álvaro M. Valdebenito, Geert Jan van Oldenborgh, Jin-Song von Storch, David Wang,
Joerg Wegner, Heiner Widmann, Claudia Wunram, Klaus Wyser
Bibliography
[BitInformation.jl]
M. Klöwer, M. Razinger, J. J. Dominguez, P. D. Düben and T. N. Palmer, 2021: Compressing
atmospheric data into its real information content. Nature Computational Science 1, 713–724.
doi:10.1038/s43588-021-00156-2
[CDI]
Climate Data Interface, from the Max Planck Institute for Meteorology
[CM-SAF]
Satellite Application Facility on Climate Monitoring, from the German Weather Service (Deutscher
Wetterdienst, DWD)
[CMOR]
Climate Model Output Rewriter, from the Program For Climate Model Diagnosis and Intercomparison
(PCMDI)
[ecCodes]
API for GRIB decoding/encoding, from the European Centre for Medium-Range Weather Forecasts
(ECMWF)
[ECHAM]
The atmospheric general circulation model ECHAM5, from the Max Planck Institute for Meteorology
[GMT]
The Generic Mapping Tool, from the School of Ocean and Earth Science and Technology (SOEST)
[GrADS]
Grid Analysis and Display System, from the Center for Ocean-Land-Atmosphere Studies (COLA)
[GRIB]
GRIB version 1, from the World Meteorological Organisation (WMO)
[HDF5]
HDF version 5, from the HDF Group
[INTERA]
INTERA Software Package, from the Max Planck Institute for Meteorology
[Magics]
Magics Software Package, from the European Centre for Medium-Range Weather Forecasts (ECMWF)
[MPIOM]
Ocean and sea ice model, from the Max Planck Institute for Meteorology
[NetCDF]
NetCDF Software Package, from the UNIDATA Program Center of the University Corporation for
Atmospheric Research
[PINGO]
The PINGO package, from the Model & Data group at the Max Planck Institute for Meteorology
[REMO]
Regional Model, from the Max Planck Institute for Meteorology
[Preisendorfer]
Rudolph W. Preisendorfer: Principal Component Analysis in Meteorology and Oceanography, Elsevier
(1988)
[PROJ]
Cartographic Projections Library, originally written by Gerald Evenden, then of the USGS.
[SCRIP]
SCRIP Software Package, from the Los Alamos National Laboratory
[szip]
Szip compression software, developed at University of New Mexico.
[vonStorch]
Hans von Storch, Francis W. Zwiers: Statistical Analysis in Climate Research, Cambridge University
Press (1999)
[YAC]
YAC - Yet Another Coupler Software Package, from the DKRZ and the Max Planck Institute for Meteorology
A. Environment Variables
The following table describes the environment variables that affect CDO.
B. Parallelized operators
Some of the CDO operators are parallelized with OpenMP. To use CDO with multiple OpenMP threads,
you have to set the number of threads with the option '-P'. Here is an example of distributing the
bilinear interpolation over 8 OpenMP threads:
cdo -P 8 remapbil,targetgrid infile outfile
C. Standard name table
The following CF standard names are supported by CDO.
D. Grid description examples
Figure D.1.: Orthographic and Robinson projection of the curvilinear grid, the first grid cell is colored red
Example description for an unstructured grid
Index

A
abs, 99
acos, 99
add, 102
addc, 101
addtrend, 175
adipot, 229
adisit, 229
aexpr, 95
aexprf, 95
after, 218
ap2pl, 194
apply, 39
asin, 99
atan, 99
atan2, 102

B
bandpass, 220
bitrounding, 42
bottomvalue, 62

C
cat, 40
changemulti, 54
chcode, 80
chlevel, 80
chlevelc, 80
chlevelv, 80
chname, 80
chparam, 80
chunit, 80
clone, 40
cmorlite, 233
codetab, 37
collgrid, 50
consecsum, 119
consects, 119
const, 224
copy, 40
cos, 99

D
dayadd, 103
dayavg, 144
daydiv, 103
daymax, 144
daymean, 144
daymin, 144
daymul, 103
daypctl, 145
dayrange, 144
daystd, 144
daystd1, 144
daysub, 103
daysum, 144
dayvar, 144
dayvar1, 144
delcode, 55
delete, 52
delgridcell, 61
delmulti, 54
delname, 55
delparam, 55
deltat, 222
detrend, 173
dhouravg, 155
dhourmax, 155
dhourmean, 155
dhourmin, 155
dhourrange, 155
dhourstd, 155
dhourstd1, 155
dhoursum, 155
dhourvar, 155
dhourvar1, 155
diff, 33
diffn, 33
distgrid, 49
div, 102
divc, 101
divcoslat, 110
divdpm, 110
divdpy, 110
duplicate, 43
dv2ps, 202
dv2uv, 203

E
enlarge, 88
ensavg, 121
ensbrs, 124
enscrps, 124
enskurt, 121
ensmax, 121
ensmean, 121
ensmedian, 121
ensmin, 121
enspctl, 121
ensrange, 121

G
ge, 68
gec, 69
genbic, 182
genbil, 181
gencon, 185
gencon2, 187
gendis, 184
genlaf, 189
genlevelbounds, 82
gennn, 183
gh2hl, 195
gheight, 228
gmtcells, 213
gmtxyz, 213
gp2sp, 200

I
ifnotthen, 64
ifnotthenc, 65
ifthen, 64
ifthenc, 65
ifthenelse, 64
import_amsr, 209
import_binary, 207
import_cmsaf, 208
info, 30
infon, 30
input, 210
inputext, 210
inputsrv, 210
int, 99
intlevel, 196