


HYSPLIT USER'S GUIDE
Roland Draxler
Barbara Stunder
Glenn Rolph
Ariel Stein
Albion Taylor
Sonny Zinn
Chris Loughner
Alice Crawford

Version 5.3 - Last Revision: November 2023 [1]


Abstract

The HYSPLIT (Hybrid Single-Particle Lagrangian Integrated Trajectory) Model installation, configuration,
and operating procedures are reviewed. Examples are given for setting up the model for trajectory and concentration
simulations, graphical displays, and creating publication quality illustrations. The model requires specially preformatted
meteorological data. Programs that can be used to create the model's meteorological input data are described. The User's
Guide has been restructured so that the section titles match the GUI help menu tabs. Although this guide is designed to
support the PC and UNIX versions of the program, the executable of the on-line web version is identical. The only
differences are the options available through the interface.

Features

The HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model is a complete system for computing
simple air parcel trajectories as well as complex dispersion and deposition simulations, using either puff or particle
approaches [2]. It consists of a
modular library structure with main programs for each primary application: trajectories and air concentrations.

Gridded meteorological data, on a latitude-longitude grid or one of three conformal (Polar, Lambert, Mercator) map
projections, are required at regular time intervals. The input data are interpolated to an internal sub-grid, centered
on the computational region, to reduce memory requirements and increase computational speed. Calculations may be
performed sequentially or
concurrently on multiple meteorological grids, usually specified from fine to coarse resolution.

Air concentration calculations require the definition of the pollutant's emissions and physical characteristics (if
deposition is required). When multiple pollutant species are defined, an emission would consist of one particle or puff
associated with each pollutant type. Alternately, the mass associated with a single puff may contain several species. The
latter approach is used for calculation of chemical transformations when all the species follow the same transport
pathway. Some simple chemical transformation routines are provided with the standard model distribution.

The dispersion of a pollutant is calculated by assuming either a Gaussian or Top-Hat horizontal distribution within a
puff or from the dispersal of a fixed number of particles. A single released puff will expand until its size exceeds the
meteorological grid cell spacing and then it will split into several puffs. An alternate approach combines both puff and
particle methods by assuming a puff distribution in the horizontal and particle dispersion in the vertical direction. The
resulting calculation may be started with a single particle. As its horizontal distribution expands beyond the
meteorological grid size, it will split into multiple particle-puffs, each with their respective fraction of the pollutant
mass. In this way, the greater accuracy of the vertical dispersion parameterization of the particle model is combined
with the advantage of having an expanding number of particles represent the pollutant distribution as the spatial
coverage of the pollutant increases and therefore a single particle can represent increasingly lower concentrations.

Air concentrations are calculated at a specific grid point for puffs and as cell-average concentrations for particles. A
concentration grid is defined by latitude-longitude intersections. Simultaneous multiple grids with different horizontal
resolutions and temporal averaging periods can be defined for each simulation. Each pollutant species is summed
independently on each grid.

The routine meteorological data fields required for the calculations may be obtained from existing archives or from
forecast model outputs already formatted for input to HYSPLIT. In addition, several different pre-processor programs
are provided to convert NOAA, NCAR (National Center for Atmospheric Research) re-analysis, or ECMWF (European
Centre for Medium-range Weather Forecasts) model output fields to a format compatible for direct input to the model.
The model's meteorological data structure is compressed and in "direct-access" format. Each time period within the data
file contains an index record that includes grid definitions to locate the spatial domain, check-sums for each record to
ensure data integrity, variable identification, and level information. These data files require no conversion between
computing platforms.

The modeling system includes a Graphical User Interface (GUI) to set up a trajectory, air concentration, or deposition
simulation. The post-processing part of the model package incorporates graphical programs to generate multi-color or
black and white publication quality Postscript printer graphics.

A complete description of all the equations and model calculation methods for trajectories and air concentrations has
been published [3] and is also available on-line. The on-line version and the version included with the PC installation
contain all the most recent corrections and updates.

Pre-Installation Preparation

There are two installation programs that can be downloaded. The trial version (HYSPLIT_win{32|64}U.exe ~ 70 Mb) is
available to anyone, while the fully functional version (HYSPLIT_win{32|64}R.exe ~ 70 Mb) requires a user registration
through the web site. Both versions are identical, except that the trial version will not work with forecast
meteorological data files. It is assumed that Tcl/Tk, a web browser, and ImageMagick will already have been installed
when the registered version is installed on top of the trial version.

The self-installing executable contains only HYSPLIT related programs. No additional software is required to run a
simulation if the command line interface is sufficient. To enable the model's GUI, the computer should have Tcl/Tk
script language installed. The most recent version can be obtained over the Internet. The installation of Tcl/Tk will
result in the association of the .tcl suffix with the Wish executable, and all HYSPLIT GUI scripts will then show the Tk
icon. The HYSPLIT GUI has been tested with Tcl/Tk version 8.6.7.

The primary HYSPLIT graphical display programs convert the trajectory and concentration model output files to
Postscript format or to Scalable Vector Graphics (SVG) format as HTML. The HYSPLIT GUI is configured to use SVG
that can be viewed directly through a web browser. The HYSPLIT code and GUI have been tested with Mozilla Firefox
78.12.0esr. Installation to different default drives, directories, or other versions might require editing the main GUI
script's directory pointers (edit the file /guicode/hysplit.tcl or the directory entry in the
Advanced / Configuration / Directories menu tab).

The third optional GUI feature is the ability to convert the SVG output file to different graphical formats using
ImageMagick. More information on this software can be found on the ImageMagick web site. The HYSPLIT code and GUI have
been tested with ImageMagick 6.4. Installation to different default drives or directories than suggested by the
installation process may require editing the main GUI script's directory pointers. Although the setup script tests for
the default language, installation to non-English default Windows operating systems might require additional editing of
the directory pointers.

Windows Installation (all operating systems)


HYSPLIT installation to a computer running Windows is provided through a self-installing file. Executables are
installed in various directories for trajectories, dispersion, display and manipulation of results, and the creation of input
meteorological data files. The trajectory and dispersion model source code is not provided. However, all the Fortran
source code to create meteorological data files in a format that the model can read is provided in the /data2arl
directory. Each subdirectory contains a @readme.txt file with more complete information about the contents of that
directory.

During the installation you will be prompted as to the directory location. It is suggested you select a simple default
location (such as C:/hysplit). The installation program installs all code and executables to your selected directory, and
creates a shortcut on the desktop to /guicode/hysplit.tcl with the "Start In" directory as your selected default. You may
have one of two versions of the installation program: HYSPLIT_win{32|64}{R|U}.exe. The suffixes R and U refer to
the Registered or Unregistered versions. The two versions are almost identical, except that the unregistered version does
not permit calculations with current forecast meteorological data.

HYSPLIT and all the related programs are available for either 32 bit or 64 bit operating systems. The 32 bit version can
be installed on either OS, while the 64 bit programs can only be installed on a 64 bit OS. An Apple MAC version is also
available (HYSPLIT_mac.dmg).

Installing on top of an old version will bring up a "Continue" or "Cancel" prompt. It is not possible to rename the
installation directory at this stage. Rename your old installation prior to installing the new code if you wish to keep the
original version.

The installation will contain several sub-directories, some of which are required for model execution, and some of
which provide additional documentation and other information. For instance ...

bdyfiles - This directory contains an ASCII version of gridded land use, roughness length, and terrain data. The
current file resolution is 360 x 180 at 1 degree. The upper left corner starts at 180W 90N. The files are read by
both HYSPLIT executables, hyts_std (trajectory model) and hycs_std (concentration model), from this directory.
If not found, the model uses default constant values for land‑use and roughness length. The data structure of these
files is defined in the file ASCDATA.CFG, which should be located in either the model's startup or /bdyfiles
directory. This file defines the grid system and gives an optional directory location for the landuse and roughness
length files. These files may be replaced by higher resolution customized user-created files. However, regardless
of their resolution, the model will only apply the data from these files at the same resolution as the input
meteorological data grid. More information on the structure of these files can be found in the local @readme.txt
file.

data2arl - Current forecast or archive meteorological data can be obtained from the ARL ftp server:
ftp://gus.arlhq.noaa.gov/pub/archives (or /forecasts). Older archive data can be ordered from the NCDC
(National Climatic Data Center). However if you have access to your own meteorological data or data formatted
as GRIB (Gridded Binary), this directory contains various example decoder programs to convert meteorological
data in various formats to the format (ARL packed) that HYSPLIT can read. Sample programs include GRIB
decoders for ECMWF model fields, NCAR/NCEP (National Centers for Environmental Prediction) re-analysis
data, and NOAA Aviation, ETA, and Regional Spectral Model files. All the required packing and unpacking
subroutines can be found in the /source subdirectories. Sample compilation scripts for Compaq Visual Fortran 6.6
are in some of the decoder directories. More information on how to run these programs can be found in the
Meteorology section.

examples - The directory contains several example scripts and batch files that can be used to create automated
simulations.

html - Contains all the HELP files in HTML format. These files can be displayed with any browser or
interactively through the GUI. The files that are opened in the GUI depend upon the context from which HELP is
invoked.

document - This directory contains PDF (Adobe Portable Document Format) versions of the User's Guide (all the
HTML help files put together in one document) and other documentation such as ARL-224, the principal ARL
Technical Memorandum describing the model and equations. This User's Guide (this document) provides detailed
instructions in setting up the model, modifications to the Control file to perform various simulations and output
interpretation. The @readme.txt file contains additional information about compilation, typical CPU times, and a
summary of recent model updates.

exec - Is the directory that contains all the executable programs. The GUI looks for all programs in this directory.
When running examples from the command prompt in certain directories, the relative path should be included
prior to the executable: "../exec/program.exe"

graphics - There are two types of graphical plotting programs provided in the ../exec directory. Publication quality
graphics can be created using the postscript conversion programs, concplot and trajplot, which use a Fortran
Postscript library created by Kevin Kohler [5]. All graphical routines use the map background file arlmap in this
directory. The map background file uses a simple ASCII format and contains the world's coastal and political
boundaries at relatively coarse resolution. Other higher resolution map background files are available in the
/graphics/mapfiles directory or from the HYSPLIT download web page. All graphical programs search the startup
directory first for arlmap before going to /graphics, therefore customized maps can be created without changing
the HYSPLIT installation structure.

guicode - This directory contains a Tcl/Tk GUI interface source code script for HYSPLIT. The interface is used to
set up the input Control file as well as run the graphical output display programs. To use the interface you must
first install Tcl/Tk. The upper-level Tcl script is called hysplit.tcl, which calls all other Tcl scripts. Executing this
script starts the HYSPLIT GUI. The Desktop shortcut as well as the Start Menu options should point to this script.
If the installation program did not properly setup the Desktop, you can manually create a shortcut to the script and
edit its properties such that the "Start In" directory is /hysplit. You should also select the HYSPLIT icon from the
/icons directory.

working - This is the HYSPLIT working directory, which contains sample CONTROL files that can be used for initial
guidance to set up more complex simulations. These should be loaded into the GUI from the appropriate
"Retrieve" menu tab. Examples include:

sample_conc - concentration simulation example from users guide

sample_traj - trajectory simulation example from users guide

The "plants.txt" file contains a sample listing of starting locations that can be opened in the GUI to select
from a list of previously determined starting locations. This file can easily be customized. The "tilelist.txt"
file contains the approximate coordinates of NCEP's NAM (North America Mesoscale model) tile domains.

Problems

If Tcl/Tk does not exist on your system or there are other problems with the GUI interface, it is very easy to run the
sample cases directly in the /working directory by running the batch file "run_{model}.bat". If the sample simulation
works well, then it is only necessary to manually edit the CONTROL file to try out different simulation variations. The
CONTROL file options are explained in more detail in the individual Trajectory and Concentration Setup sections.

In general, premature termination during the model initialization phase will result in messages to standard output.
However, after the model has started, fatal, diagnostic, and progress notification messages are written to a file called
MESSAGE. If the model output is not what you expected, first check the CONTROL file to determine if the input setup is
what is desired, then check the MESSAGE file for indications of abnormal performance. These files are always written to
the model's startup directory - /hysplit if the model is run from the GUI. At times error messages may be lost in the
display buffer after premature termination. In this case the model should be rerun from the command line window for
proper display of all standard output messages. The "Advanced" menu contains a View MESSAGES tab that displays the
last MESSAGE file for viewing. Other features of the advanced menu are used to modify the model's configuration file
and are explained in more detail in that section. Modifications to these parameters require a complete understanding of
the model's design and operation.

References

[1] Draxler, R.R., 1999: HYSPLIT_4 User's Guide. NOAA Technical Memorandum ERL ARL-230, June, 35 pp.

[2] Draxler, R.R., and G.D. Hess, 1998: An overview of the HYSPLIT_4 modelling system for trajectories,
dispersion, and deposition. Australian Meteorological Magazine, 47, 295-308.

[3] Draxler, R.R., and G.D. Hess, 1997: Description of the HYSPLIT_4 modeling system. NOAA Technical
Memorandum ERL ARL-224, December, 24 pp.

[4] Stein, A.F., R.R. Draxler, G.D. Rolph, B.J.B. Stunder, M.D. Cohen, and F. Ngan, 2015: NOAA's HYSPLIT
atmospheric transport and dispersion modeling system. Bulletin of the American Meteorological Society, 96,
2059-2077. doi:10.1175/BAMS-D-14-00110.1

[5] PSPLOT libraries, created by Kevin Kohler ([email protected]), can be found at
https://hcas.nova.edu/tools-and-resources/psplot/index.html
HYSPLIT User's Guide Help File Index
Model Overview

GUI Overview

METEOROLOGY - Data Overview

ARL Data FTP - External sources of data

Forecast and appended ARL archives for the last two days
Archive of long-term ARL formatted data
Set Server changes the name and directory of the FTP server

Convert to ARL - Convert data on local computer to ARL format

WRF-ARW Advanced Research WRF NetCDF


Global Lat-Lon European Centre or NOAA global lat-lon grids
ECMWF ERA ECMWF Reanalysis ERA-40 or Interim GRIB1 files
User Entered single station and level data entry

Display Data - Viewing HYSPLIT formatted meteorological data

Check File lists all the records in a meteorological file


Contour Map of a single meteorological variable
Text Profile of all variables at a single location
Grid Domain shows a map of meteorological grid locations

Utility Programs - Meteorology / Trajectory / Concentration Menus

GIS to Shapefile converts text GIS output to ESRI Shapefile


SVG to Image converts to another graphic format

Meteorology Help

Data Format description of the packed meteorological data format


Sample Programs to read ARL and GRIB1 data files

TRAJECTORY - Menu Overview

Quick Start for automatic configuration and execution


Setup Run creates the trajectory simulation CONTROL file
Save / Retrieve Options for custom simulations for future use
Run Model menu tab to execute the trajectory model

Display Options

Trajectory converts the endpoints file to a graphic


Frequency converts multiple trajectory files to a graphic

Utility Programs
Endpoints to IOAPI converts the endpoints file to IOAPI

Special Simulations - Trajectory simulations with special requirements

Test Inputs suggests changes to CONTROL and SETUP.CFG


Run Ensemble by varying location on meteorology grid
Run Matrix defines a grid of multiple source locations
Multi-time starts new trajectories at regular intervals
Multi-space starts new trajectories along a trajectory
Run daily invokes script to do multiple simulations in time
Run Cluster Analysis to find which trajectories belong together
GeoLocation back trajectory analysis from sampling locations
Control File Format line-by-line description
Endpoints File Format description

CONCENTRATION - Menu Overview

Quick Start for automatic configuration and execution


Concentration Setup of the simulation CONTROL file
Pollutant, Deposition, and Grid menu to select items below:
Pollutant definition of the emission characteristics
Grid Definition of the air concentration sampling grid
Deposition definition of the pollutant deposition parameters
Run Model to start the concentration model

Display Options

Concentration Contours converts the binary model file to a graphic


Grid Values uses colorfill to show values on the concentration grid
Global Grid graphic optimized to plot global concentration grids
Particle displays horizontal and vertical particle distributions
Plume Arrival uses colorfill to show first arrival times after start
Source-Receptor View creates a graphic from a source-receptor matrix file
Source-Receptor Stats creates a statistics map for multi-source simulations
Ensemble View Map creates probability maps from multiple simulations
Ensemble Box Plot shows the probability distribution at one location
Ensemble Statistics computes ensemble statistical performance

Utility Programs

Binary File Merge adds or masks concentration binary files


Binary File Extract creates a temporal or spatial concentration file extract
Binary File Average averages multiple concentration binary files
Binary File Apply Source multiplies time-varying dispersion factors by source term
Convert to ASCII converts the binary concentration file to an ASCII format file
Convert to DATEM convert to DATEM format and compute verification statistics
Convert to Dose convert a binary file to dose using a dose factor data file
Convert to IOAPI converts the concentration binary file to IOAPI
Convert to Station extracts the concentrations at a station and writes to a file
Convert PARDUMP converts PARDUMP file to binary concentration file
Particle Adjustment adjust particle positions for improved initialization
Transfer Coefficient SVD solution using measurements to solve for the emissions
Transfer Coefficient Cost function minimization to determine the emissions vector

Special Simulations - Concentration simulations with special requirements

Test Inputs suggests changes to CONTROL and SETUP.CFG


Run daily invokes a script for multiple simulations in time
Run Matrix configures and runs dispersion for an array of locations
GeoLocate from measurements determine the emissions location
Ensemble Meteorology for simulations using meteorology grid offsets
Ensemble Turbulence for multiple turbulence simulations
Ensemble Physics for multiple physical parameterizations
Global runs a simulation with particle mass transfer to Eulerian global grid
Run Dust Storm for PM10 emissions from dust storms
Daughter Products from nuclear decay of a parent nuclide

Related Concentration Help

Multi-Processor special simulations in multi-processor environment


Concentration File Format line-by-line description
PARDUMP format for particle displays or model restarts
Terrain Overlay using shapefiles to overlay terrain on plots
Special Topics for complex simulation scenarios

ADVANCED Overview of features for advanced users

Configuration Setup to edit the SETUP.CFG namelist file


Trajectory edit menu for trajectory simulations
Concentration edit menu for concentration simulations
Global configures the global Eulerian model (grid-in-plume)
Dynamic Sampling configuration of the LAGSET.CFG file
Emissions File configuration of the point source file
Set GUI Directories configures the directory structure
Panel Labels edit menu for supplemental graphic map labels
Border Labels edit menu for map titles and units labels
Foot and Mouth Disease Virus (FMDV) decay configuration
User Defined Internal Vertical Levels configures user defined internal vertical levels

SETUP.CFG NAMELIST SUMMARY

Summary of all NAMELIST variable defaults


Time Steps
Trajectory Vertical Coordinate
Multiple Trajectories in Time
Trajectory Points Output Frequency
Meteorology Along the Trajectory
Meteorological Ensemble
Mixing Depth Computation Method
Concentration Vertical Coordinate
Release Particles or Puffs
Release Number Limits
Emission Cycling and Input
Turbulence and Dispersion Computations
Concentration Packing and Output Units
Input and Output Particle Files
In-Line Conversion Modules
Puff-Split-Merge Issues
Variables Not Set in GUI

File Formats

activity.txt for radiological dose calculations


ASCDATA.CFG for terrain, land-use, and roughness
Concentration binary output file
CONTROL input file for trajectories
CONTROL part 1 input for concentrations
CONTROL part 2 input for concentrations
CONTROL part 3 input for concentrations
CONTROL part 4 input for concentrations
DATEM format for measured and calculated data
default_exec default program directory locations for GUI
EMITIMES input file for emissions
Endpoint output file for trajectories
LABELS.CFG for graphic border labels
LAGSET.CFG for dynamic sampling
MESSAGE file diagnostic information
Meteorological data in the ARL-HYSPLIT format
PARDUMP format for particle displays or model restarts
SETUP.CFG NAMELIST file variable defaults
Shapefiles after conversion from ESRI Generate
ZSG_LEVS.IN user defined internal vertical levels

Utilities - Listing of HYSPLIT utility programs including file conversion, graphics creation, meteorological data
editing, trajectory analysis, shapefile manipulation.

VERIFICATION - Model statistics with experimental data


GUI Overview
Although the model can be configured and run manually from the command line, it is usually much quicker to use the
GUI. A summary of the main features in each of the primary GUI menus is reviewed below. More detailed help
information can be found with the help button associated with each specific menu.

Meteorology / ARL Data FTP

Forecasts - Links to only the most recent forecast data for a variety of different meteorological models available
on the ARL FTP server. The data have already been processed to a HYSPLIT compatible format. Appended data are
defined as a combination of analysis and short forecasts (e.g. 0h and +3h appended with each new 6h cycle) that
are operationally maintained as a rolling archive valid for 48 hours prior to the most recent forecast cycle.

ARL Archive - Provides access to the long-term data archives at ARL for various meteorological data that have
already been formatted for use by HYSPLIT. These data include the global GFS, GDAS, and NCAR/NCEP
reanalysis. Files inclusive to North America include various versions of NAM, HRRR, and WRF.

Set Server - A generic menu that can be customized to open different backup or user defined FTP servers to
download pre-formatted HYSPLIT compatible meteorological forecasts.

Meteorology / Convert to ARL

Many of the scripts linked in this menu will convert various data file formats into the data format used by
HYSPLIT. GRIB version 2 is not yet supported. However, conversion software from GRIB-2 to GRIB-1 can be
obtained from several different sources (NOAA-NCEP, ECMWF, WMO). All files must be on the local
computer. FTP is no longer supported through these menus.

WRF-ARW - Processes fields from NCAR's Advanced Research WRF model into the ARL format. The data
conversion routines require NetCDF routines currently supported only on UNIX or LINUX platforms.

Global Lat-Lon - Archived GRIB-1 fields from the global model of ECMWF or NOAA may be processed locally
into ARL format. The procedure is designed only to work with archive data from their current operational model,
where the last four digits of the file name represent the day.

ERA-40 - Archive GRIB-1 fields from the ECMWF ERA-40 reanalysis may be processed locally into ARL
format. These files must already be present. The procedure is designed only to work with the ERA-40 data files.
Three different file types must be downloaded for this conversion: 3-d fields, 2-d surface fields, and one invariant
field.

User Entered - This menu tab is intended to convert user entered meteorological data to the ARL packed
HYSPLIT compatible format. The conversion program will create a data file with one or more time periods
containing a uniform field in space and height but varying in time. The grid is centered over the location specified
in the main menu and covers a 50 by 50 km domain.

Meteorology / Display Data

Check File - A program to list the information in the meteorology file index record as well as the ASCII header
portion of each data record.

Contour Map - A simple SVG contouring program that can be used to view the data fields in any ARL packed
meteorological data set. The output is always written to a file called contour.html. Multiple time periods may be
processed.

Text Profile - This program is used to extract the meteorological data profile interpolated to a user selected
latitude-longitude position. The raw data values at the surface and each level are output on the left while
temperature converted to potential temperature and winds rotated from grid orientation to true are shown on the
right. Output is shown on the screen and is also written to a file called profile.txt.

Grid Domain - Program to generate a map showing the domain of the meteorological data grid with marks at each
selected data grid point. Output is written to the file showgrid.ps.

Utilities

Convert SVG - Converts any SVG file to another format using ImageMagick. Enter the name of the SVG file and
the desired name of the output file, with the file suffix indicating the desired conversion.
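
From the command line, the equivalent conversion with ImageMagick's convert tool would be (file names are
illustrative):

convert trajplot.svg trajplot.png

The format of the output is inferred from the suffix of the second file name.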

GIS to Shapefile - When the GIS output option is selected in certain display programs, an ASCII text file (in
generate format) is created that contains the latitude-longitude values of each contour. This GIS text file can be
converted to an ESRI ArcView shapefile. This menu tab is available from three different points. From the
meteorology tab the conversion is "line", from the trajectory tab the conversion is "points", and from the
concentration tab the conversion is "polygon".

Trajectory

Quick Start - This menu tab can be used to run any simulation in one step. The last configuration will be used for the
simulation. The menu brings up a global map with the current source location shown as a blue dot which can be
moved to a new location. The Run Example menu tab resets the control and namelist files and reruns the example
case. This can be used as a starting point for configurations that have been corrupted.

Setup Run - This menu is used to write the CONTROL file that defines all the model simulation parameters. Note
that once a simulation has been defined, it can be "Saved As" and later "Retrieved". Note that the setup must be
"Saved" before proceeding to the "Run Model" step.

Run Model - The selection opens the window for standard output and starts the execution of "hyts_std.exe".
Incremental progress is reported on WinNT and later systems; however, on LINUX the display remains frozen
until the calculation has completed (a beep will sound).

Display - Display options are available to label the time increment along the trajectory and the amount of white
space surrounding the trajectory on the map. The HIRES option limits the domain to just fit the trajectory. Output
is written to the file trajplot.ps. The plot title can be defined in a file called LABELS.CFG which should be placed in
the \working directory. The special frequency display can be used to plot the spatial frequency distribution of
multiple trajectory simulations.

Special Runs - The Ensemble option invokes a special executable that will run multiple trajectories from the same
location, with each trajectory using meteorological data slightly offset from the origin point. The Matrix option
simultaneously runs a regular grid of trajectory starting locations. Also available are multiple trajectory options in
both time and space as well as the daily menu to run a series of automated trajectories for multiple days or months
per execution. Multiple trajectories can also be clustered.

Concentration

Quick Start - This menu tab can be used to run any simulation in one step or rerun the example simulation. Its
functionality is similar to the trajectory Quick Start menu.

Setup Run - The menu is almost identical to the Trajectory Setup menu except for the added tab for "Pollutant,
Deposition, and Grids", which is used to customize the concentration computation. Unlike trajectories, which are
terminated at the top of the model domain, pollutant particles reflect at the model top.

Run Model - Similar in function to trajectory run menu, executes "hycs_std.exe".


Display - Output is written to concplot.ps. Options on the menu permit concentration contours to be dynamically
scaled to the maximum for each map (default) or fixed for all maps based upon the highest value on any one map.
Other display programs are available for the color fill of uncontoured gridded data or the horizontal or vertical
display of the particle distributions and the computation of plume arrival times. Note that the units label as well as
the plot title can be defined in a file called LABELS.CFG which should be placed in the \working directory.
Special display programs are available for a source-receptor matrix, ensemble, and statistical simulation outputs.

Utilities

Convert to ASCII - Writes an ASCII file for each time period of concentration output with one record for every
grid point that has a non-zero concentration value. Each record is identified by the ending date and time of the
sample and the latitude and longitude of each grid point. The output file name is constructed as
[input file]_[julian day]_[output hour]. Some other output options are available.
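
For example, under this naming scheme a binary input file named cdump (a hypothetical name) with a sample
ending on Julian day 289 at output hour 12 would produce an output file named:

cdump_289_12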

Grid to Station - This program is used to extract the time series of concentration values for a specific location.
That location may be specified by a single latitude-longitude through the GUI or multiple stations may be defined
by entering the name of a file that contains an integer station number, latitude, and longitude on each record. The
output is written to con2stn.txt which can be read by the time series plotting program to produce the file
timeplot.ps. Note that the time series plot is always in integer units and therefore may require the specification of
a units multiplication factor.
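
A hypothetical multi-station input file following the layout described above, with an integer station number,
latitude, and longitude on each record:

1  40.00  -90.00
2  41.50  -88.25
3  39.75  -91.10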

Convert to DATEM - Conceptually similar to Grid-to-Station, this menu is used to convert a binary file to the
DATEM format and then compute model performance statistics. A data file with measurement data in DATEM
format must already exist, and the conversion program will match each model calculated concentration with a
measurement, compute performance statistics, and display a scatter diagram of measured and calculated values.

Merge Binary Files - Multiple binary concentration files may be added together for a sum, the maximum value may be
retained at each grid point, or one of the input files may be used as a zero mask, resulting in zero concentrations in
the other file when the mask file shows a non-zero value.

Special Runs

Daily - Runs a script to rerun the last dispersion model configuration each time with a different starting day and
time for days or months. Output files are labeled with the starting time or a sequential number. This script can
also be used to create a time dependent Transfer Coefficient Matrix (TCM) when the run length is reduced with
each iteration so that all dispersion simulations end at the same time.

Matrix - Runs the matrix pre-processor to create a CONTROL file with a grid of starting locations. The source-
receptor conversion option needs to be selected in the advanced menu to create the coefficient matrix required for
source attribution applications.

Geolocation - Runs a special pre-processor that reads a file of measured sampling data to automatically configure
a CONTROL file for each sample which is then run in a script to create a source attribution function for each
measured sample.

Ensemble / Meteorology - An application similar to the trajectory ensemble, in that multiple simulations are run
each with a different offset in the meteorological data to create an ensemble showing the sensitivity of the
simulation to gradients in the meteorological data fields.

Ensemble / Turbulence - Instead of starting each simulation with the same random number seed, multiple
simulations are run, each with a different seed, which then shows the concentration sensitivity to the turbulence.

Ensemble / Physics - The physics ensemble varies different turbulence and dispersion namelist parameters from
the base simulation to develop an ensemble showing the sensitivity of the calculation to various model physics
assumptions.
Global - The global special run invokes an executable that contains a global Eulerian module to permit particle
mass to be transferred to the global model at a specified age. A concentration output grid is created for each
computation: one representing local/regional scale plumes, the other the contribution to global background
concentrations.

Dust Storm - Invokes a pre-processor that creates a CONTROL file with each location within the selected
computational domain that contains a desert land-use designation. This option should be used in conjunction with
the advanced menu option to turn on the dust emission module.

Daughter Products - Invokes a pre-processor that creates a DAUGHTER.TXT file containing the daughter
nuclides produced by a parent nuclide along with the half-life and branching fractions. In addition a CONTROL
file is modified to include the daughter product information. This option should be used in conjunction with the
advanced menu option to turn on the daughter product module and set MAXDIM to the number of daughters.

Advanced Topics

Contains several menus for custom model configurations that can be used to change the nature of the simulation
and the way the model interprets various input parameters. These menus would be used to configure matrix,
ensemble, global, pollutant transformations, and model initialization options.

Trajectory Configuration - Creates the SETUP.CFG namelist file for trajectory calculations.

Concentration Configuration - Creates the SETUP.CFG namelist file for concentration-dispersion calculations.

Global Configuration - Creates the GEMPARM.CFG namelist file to configure the global Eulerian model
subroutines.

Dynamic Sampling - Creates a virtual sampler that can fly through the computational domain, either passively
with the wind or with a pre-defined velocity vector.

Emissions File - Opens a simple editor to create a special emission file that can define point sources that vary in
space, time, and intensity.

Center Reset Button

This will cause any changes to the GUI variables to be set back to the sample trajectory and concentration simulation
case which uses the oct1618.BIN sample meteorological file. This button loads the sample_conc and sample_traj control
files from the working directory.

Table of Contents
Advanced / Special Topics / ASCDATA.CFG
The bdyfiles subdirectory contains ASCII files for gridded land-use, terrain, and roughness length data for HYSPLIT.
The terrain file is used by some of the programs in data2arl if the terrain is not provided with the other meteorological
fields. The resolution for these files is one-degree covering the globe: 360 by 180 grid points. The upper left corner
starts at 180W - 90N. If these files are not found, the model uses default constant values for land-use and roughness
length.

The structure of these files is given in the ASCDATA.CFG file, which defines the grid system and gives the directory
location of the land-use, terrain, and roughness length files. The ASCDATA.CFG file should be located in the model's
startup or root directory. The last line in the file should be modified to reflect the path to the bdyfiles directory.

File Format

The ASCDATA.CFG file contains the following six records:

-90.0 -180.0       lat/lon of lower left corner (last record in file)
1.0 1.0            lat/lon spacing in degrees between data points
180 360            lat/lon number of data points
2                  default land use category
0.2                default roughness length (meters)
'../bdyfiles/'     directory location of data files

These files may be replaced by higher resolution user-created files. Note that the first data point on the first record
is assumed to be centered at 179.5W and 89.5N, so that from the northwest corner the data go eastward and then
southward. User-supplied files should define roughness length (cm - F4) and the following land-use categories (I4). The
record length of the file should be the number of longitude values times four bytes plus one additional byte for the CR.
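
As an illustration only, a minimal Python sketch for reading a one-degree file laid out as described above; the
file name LANDUSE.ASC and the single terminator byte per record are assumptions to be checked against the actual
files in /bdyfiles:

# Read a 360 x 180 grid of I4 values: one record per latitude band,
# starting at the northwest corner (89.5N, 179.5W), moving east then south.
RECLEN = 360 * 4 + 1                 # 360 four-character fields + 1 byte for CR

def read_grid(path="LANDUSE.ASC"):   # file name is an assumption
    grid = []                        # grid[row][col]; row 0 = northernmost band
    with open(path, "rb") as f:
        for _ in range(180):
            rec = f.read(RECLEN)
            grid.append([int(rec[i:i + 4]) for i in range(0, 1440, 4)])
    return grid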

The 11 Land-Use Categories


1 - urban
2 - agricultural
3 - dry range
4 - deciduous
5 - coniferous
6 - mixed forest
7 - water
8 - desert
9 - wetlands
10 - mixed range
11 - rocky

The One-Degree Terrain


Table of Contents
Meteorology / Help / Sample Programs
The source code for many different meteorological data applications can be found in the data2arl directory. A summary
of these programs and some others where only the executable is provided are given below. Some of these are available
for execution through the GUI, others must be run from the command line. All executables can be found in the exec
directory.

GRIB records

The various programs and library routines that are required to convert GRIB formatted meteorological data files to ARL
HYSPLIT compatible format can be found in the data2arl directory. Some of these are summarized below:

content - decodes individual GRIB sections in a record for diagnostic purposes. This program does not unpack the data
but only lists the contents of the GRIB file.

inventory - decodes all the records within a GRIB file (without unpacking) providing file content information.

unpacker - decodes GRIB records in a data file to a real data array.

sample - creates a packed meteorological file using dummy fields hardwired into the program. The input meteorological
data subroutines should be replaced by routines reading user supplied meteorological data files.

grib2arl - a generic program to convert ECMWF or NOAA global model data in GRIB1 format on a global
latitude-longitude grid to the ARL format. ECMWF data may be on the native hybrid sigma or pressure surfaces.
For computational purposes, HYSPLIT requires either surface pressure or terrain height as a surface field in the
meteorological data file. The default is that surface pressure is assumed to be available in the input GRIB file, otherwise
the terrain height is required. This option is set using the "-p" flag. ECMWF GRIB files may contain upper-level
variables in one file, surface variables in another file, and invariant data in a third file. In this situation, the upper-level
data are considered the primary file "-i" and the surface data are the supplemental file "-s", and the invariant data are the
constant file "-c".

-i[primary grib data: file name {required}]


-s[supplemental grib data: file name {optional}]
-c[constant grib data: file name {optional}]
-x[subgrid extract center longitude {-80.0}]
-y[subgrid extract center latitude {60.0}]
-g[output projection {3}]
0 :conformal extract
1 :fixed northern hemisphere polar
2 :fixed southern hemisphere polar
3 :lat-lon global grid (as input)
4 :lat-lon extract grid
-n[number of (x:y) extract grid points {100}]
-k[number of output levels including sfc {16}]
-p[surface defined by {1}:pressure or 0:terrain height]
-q[analyze grib file {0} or use saved configuration: 1]
-r[rain fall accumulation time hours: {6}]
-t[the number of time periods to process: {744}]
-z[zero initialization of output file 0:no {1}:yes]
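
For example, a hypothetical invocation converting an ECMWF archive split into upper-level, surface, and invariant
GRIB1 files to a global lat-lon ARL file (the file names are illustrative):

grib2arl -iERA_upper.grib -sERA_sfc.grib -cERA_invariant.grib -g3 -k16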

fcsubs - library routines required for direct access to variable length records

cmapf - library routines to convert from latitude-longitude to a conformal map projection


w3arl - library routines of the ARL version of the NCEP w3lib GRIB record decoder.

ARL Packed Format

The various programs and library routines that are required to manipulate or display meteorological data already in the
ARL packed format can be found in the exec directory. Note that when these programs are run from the command line
there will be a typical two line prompt, the first for the directory and second for the file name. A directory name should
always have the proper terminator (/ or \). The following programs can be found:

chk_rec - program to dump the first 50 bytes of each meteorological data record. Those bytes contain ASCII data
describing the packing.
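
The same check can be sketched in a few lines of Python, assuming the conventional 50-byte ASCII label at the start
of each fixed-length record (two-digit year, month, day, hour, forecast, level, and grid fields, then a 4-character
variable ID and the packing constants) and a known record length; both assumptions should be verified against the
Data Format help page:

# Print the date, level, and variable ID from the 50-byte label of each record.
def scan_labels(path, reclen):       # reclen = 50 + number of grid points
    with open(path, "rb") as f:
        while True:
            rec = f.read(reclen)
            if len(rec) < 50:
                break
            label = rec[:50].decode("ascii")
            yr, mo, da, hr = label[0:2], label[2:4], label[4:6], label[6:8]
            level, varid = label[10:12], label[14:18]
            print(yr, mo, da, hr, "level", level, varid)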

chk_file - program to examine header and data records of an ARL packed meteorological data file. The program uses
the same I/O subroutines common to HYSPLIT code. If this program does not work with a data file, neither will
HYSPLIT.

chk_data - a simple program that shows how to read and unpack the ARL format packed meteorological data.

contour - creates an SVG file of meteorological data contoured and color-filled for a single variable at a specified
time. The output is written to contour.html.

display - creates an SVG file of meteorological data contoured and color-filled for a single variable at a specified
time. The output is written to display.html. This program is a prompted, interactive version of the command-line
program contour.

edit_miss - a simple missing data interpolation program to fill in data gaps.

profile - creates text file of the profile of meteorological variables at a specified location and time. The output is written
to the screen and to the file profile.txt.

showgrid - shows the extent of the meteorological grid domain with a "+" symbol at the intersection of each node. The
output is written to showgrid.html

xtrct_grid - extracts a subgrid from the full-grid meteorological data file. It permits selection by lat-lon corners
and number of levels from the ground. Creates an output file called extract.bin.

xtrct_time - extracts data between two selected time periods from the designated meteorological file. Creates an
output file called extract.bin.

Table of Contents
Trajectory / Setup Run
When the Setup Run tab is executed, the default_traj file is read and the current parameter values are loaded. The menu
for the example simulation (sample_traj) is shown below. The options shown on the menu correspond with the various
lines in the CONTROL file. See the discussion of the control file format for a more detailed description for each of
these parameters. However there are some features of the GUI that require additional explanation.

Clicking the Setup Starting Locations tab brings up the menu shown below. If the number of starting locations is
changed from the default value of three on the main menu, then that number of starting location lines will be shown on
the menu, all with the same latitude and longitude. These can then be manually edited for different locations or
starting heights. The starting height is by default defined to be above-ground-level (AGL) unless
this is changed in the Advanced/Configuration/Trajectory menu. Another possibility would be to click on the List tab,
which brings up a list of pre-selected starting locations from a file called plants.txt, which can be found in the
../hysplit/working directory. This file can be edited to reflect starting locations of interest to the user.
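
As an illustration, a hypothetical plants.txt might contain one line per site; the exact column layout should be
taken from the sample file distributed in the working directory:

40.00  -90.00  Example_Plant_A
33.90  -84.40  Example_Plant_B
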
Another important feature of the main menu is how to select or add meteorological data files. The Clear button will
erase all file selections, then pressing the Add Meteorology Files tab brings up the file selection dialog shown below.
Select a file and click Open and the file will be added to the main menu. For each additional file, it is necessary to click
again on the Add Meteorology Files tab. With each new file the selected files number is incremented by one.

Once the simulation is configured as required, click the Save menu tab. This causes the GUI menu to over-write the
values in default_traj. Note that the format of default_traj is identical to the CONTROL file. Clicking on Save closes the
menu and then when the Run Model tab is executed, default_traj is first copied to CONTROL, and then the trajectory
model is run.

When the GUI menu system is restarted, it loads the last values stored in default_traj. The default_traj file may also be
saved to another name to permit future similar simulations to be set up more quickly. This option is available through
the Save As and Retrieve menu tab of the setup menu.

Table of Contents
Concentration / Setup Run
The initial setup menu for the concentration model is identical to the trajectory setup run menu in terms of starting time,
location, and meteorology. These items will not be discussed again except to note the differences when applied to a
dispersion simulation. The meaning of the entries in the CONTROL file that correspond to this setup menu is discussed
in more detail below and should be reviewed to appreciate how the change in the simulation type (from trajectory to
concentration) changes the meaning and context of the same input parameters. These parameters consist of the initial
entries of the CONTROL file. The initial concentration setup menu is shown in the illustration below.

The entries in the CONTROL file for air concentration simulations consist of four groups of input data. The first data
group is almost identical to the trajectory simulation and is described in the next section. The other three groups
define the pollutant emission characteristics, the concentration grid in terms of spacing and integration interval, and
the pollutant characteristics relevant to computing deposition and removal processes. These latter three entries are
accessed through the "Pollutant, Deposition, and Grids Setup" tab. Each of these sections contains a more detailed
description of the input parameters as well as the corresponding CONTROL file values that need to be set for command
line simulations.

Initial CONTROL File Section

The concentration model input control file can be created using any text editor. However if the GUI is not being used, it
would be easier to let the model create the initial file based upon standard output prompts. These are described in more
detail below. When data entry is through the keyboard (a file named CONTROL is not found), a STARTUP file is
created. This contains a copy of the input, and which later may be renamed to Control to permit direct editing and model
execution without data entry. If you are unsure as to a value required in an input field, just enter the forward slash (/)
character, and the indicated default value will be used. This default procedure is valid for all input fields except
directory and file names. An automatic default selection procedure is also available for certain input fields of the
CONTROL file when they are set to zero. Those options are discussed in more detail below. Each input line is numbered
(only in this text) according to the order it appears in the file. A number in parenthesis after the line number indicates
that there is an input loop and multiple entry lines may be required depending upon the value of the previous entry.

1- Enter starting time (year, month, day, hour, (minute, optional))

Default: 0 0 0 0

Enter the two digit values for the UTC time that the calculation is to start. Use 0's to start at the beginning (or end)
of the file according to the direction of the calculation. All zero values in this field will force the calculation to use
the time of the first (or last) record of the meteorological data file. In the special case where year and month are
zero, day and hour are treated as relative to the start or end of the file. For example, the first record of the
meteorological data file usually starts at 0000 UTC. An entry of "00 00 01 12" would start the calculation 36
hours from the start of the data file.

The minute field is optional. If the minute field is not present, then the default value of 0 will be used.

2- Number of starting locations

Default: 1

Single or multiple pollutant sources may be simultaneously tracked. The emission rate that is specified in the
pollutant menu is assigned to each source. If multiple sources are defined at the same location, the emissions are
distributed vertically in a layer between the current emission height and the previous source emission height. The
effective source will be a vertical line source between the two heights. When multiple sources are in different
locations, the pollutant is emitted as a point source from each location at the height specified. Point and vertical
line sources can be mixed in the same simulation. The GUI menu can accommodate multiple simultaneous
starting locations, the number depending upon the screen resolution. Specification of additional locations requires
manual editing of the CONTROL file. Area source emissions can be specified from an input file: emission.txt.
When this file is present in the root directory, the emission parameters in the CONTROL file are superseded by
the emission rates specified in the file. More information on this file structure can be found in the advanced help
section.

3(1)- Enter starting location (lat, lon, meters, Opt-4, Opt-5)

Default: 40.0 -90.0 50.0

Position in degrees and decimal (West and South are negative). Height is entered as meters above ground level
unless the mean-sea-level flag has been set.

The optional 4th (emission rate - units per hour) and 5th (emission area - square meters) columns on this input line
can be used to supersede the value of the emission rate (line 12-2) when multiple sources are defined, otherwise
all sources have the same rate as specified on line 12-2. The 5th column defines the virtual size of the source:
point sources default to "0".
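
For example, a starting location line of

40.0 -90.0 10.0 5.0 900.0

would (illustratively) define a source at 40N, 90W, 10 m AGL, emitting 5 units per hour over a 900 square meter
area, superseding the emission rate given on the pollutant line.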

4- Enter total run time (hours)

Default: 48

Sets the duration of the calculation in hours. Backward calculations are configured by setting the run time to a
negative value. See the discussion in the advanced help section on backward "dispersion" calculations.

5- Vertical motion option (0:data 1:isob 2:isen 3:dens 4:sigma 5:diverg 6:msl2agl 7:average 8:damped)
Default: 0

Indicates the vertical motion calculation method. The default "data" selection will use the meteorological model's
vertical velocity fields; other options include isobaric, isentropic, constant density, constant internal sigma
coordinate, computed from the velocity divergence, a special transformation to correct the vertical velocities
when mapped from quasi-horizontal surfaces (such as relative to MSL) to HYSPLIT's internal terrain following
sigma coordinate, and a special option (7) to spatially average the vertical velocity. The averaging distance is
automatically computed from the ratio of the temporal frequency of the data to the horizontal grid resolution.

6- Top of model domain (internal coordinates m-agl)

Default: 10000.0

Sets the vertical limit of the internal meteorological grid. If calculations are not required above a certain level,
fewer meteorological data are processed thus speeding up the computation. Trajectories will terminate when they
reach this level. A secondary use of this parameter is to set the model's internal scaling height - the height at
which the internal sigma surfaces go flat relative to terrain. The default internal scaling height is set to 25 km but
it is set to the top of the model domain if the entry exceeds 25 km. Further, when meteorological data are provided
on terrain sigma surfaces it is assumed that the input data were scaled to a height of 20 km (RAMS) or 34.8 km
(COAMPS). If a different height is required to decode the input data, it should be entered on this line as the
negative of the height. HYSPLIT's internal scaling height remains at 25 km unless the absolute value of the
domain top exceeds 25 km.

7- Number of input data grids

Default: 1

Number of simultaneous input meteorological files. The following two entries (directory and name) will be
repeated this number of times. A simulation will terminate when the computation is off all of the grids in either
space or time. Calculations will check the grid each time step and use the finest resolution input data available at
that location at that time. When multiple meteorological grids have different resolution, there is an additional
restriction that there should be some overlap between the grids in time, otherwise it is not possible to transfer a
particle position from one grid to another. If multiple grids are defined and the model has trouble automatically
transferring the calculation from one grid to another, the sub-grid size may need to be increased to its maximum
value.

While not available in the GUI, if the user is creating the CONTROL file themselves, two numbers can be
specified here: the first being the number of unique grids and the second being the number of files in each grid.
For example, an entry of 2 12 would mean that there are met files for 2 different grids (e.g., a regional and a
global grid), and that there are 12 files being specified for each grid. The grids should be specified in order of
resolution, with the highest resolution grids (i.e., the smallest horizontal spacing between grid points) being
specified before lower resolution grids. The two entries for each file (directory and filename) are repeated for each
file in the first grid, and then for each file in the second grid, and so on, for any subsequent grids. Note that the
same number of files is required for each grid in this approach. Without the use of this approach (i.e., when only
one number is specified) the maximum number of files that can be used in the simulation is relatively small, but
with this second approach, a much larger number of files can be used in the simulation.
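For example (directories and file names hypothetical), the meteorology section of a CONTROL file using the two-number form for two grids with two files each might read:

2 2
/data/regional/
regional_met_day1
/data/regional/
regional_met_day2
/data/global/
global_met_day1
/data/global/
global_met_day2

The finer resolution regional grid is listed first, and the directory-name pair is repeated for every file of the first grid before any files of the second grid are given.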

8(1)- Meteorological data grid # 1 directory

Default: ( \main\sub\data\ )

Directory location of the meteorological file on the grid specified. Always terminate with the appropriate slash (\
or /).

9(2)- Meteorological data grid # 1 file name

Default: file_name

Name of the file containing meteorological data. Located in the previous directory.

Advanced / Special Topics / Message File Format
A MESSAGE file is created during each simulation. The file contains information from certain key subroutines that can
be used for diagnostic purposes if a simulation were to fail. An example from the test concentration simulation is shown
below; a trajectory MESSAGE file would be similar but without the particle number and mass information. Each group
of file entries is followed by a more detailed description of the items it contains.

Some common warning messages

WARNING EMSPNT: exceeding puff limit


This warning occurs when the number of computational particles or puffs in the simulation reaches the number set by
MAXPAR in the SETUP.CFG file. The model will not allow the simulation to carry more than MAXPAR particles or
puffs; once that number has been reached, the model will not create more particles or puffs.
For particles, this prevents any new emissions from occurring.
For puffs, it prevents both new emissions and puff splitting.
Solutions include increasing MAXPAR or decreasing NUMPAR, as illustrated below.
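A SETUP.CFG fragment along these lines (values illustrative) raises the particle limit while lowering the number of particles released:

&SETUP
MAXPAR = 100000,
NUMPAR = 250,
/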

WARNING DEPSUS: exceeding puff limit


This warning occurs when the user has enabled particle resuspension and the number of particles carried in the
simulation has exceeded the maximum number set in the SETUP.CFG file. It will occur in conjunction with the
previous warning. Utilizing the pollutant resuspension algorithm may lead to excessive particle generation (see #31).

Description of File Format

Start Namelist configuration

Internal grid parameters (nlvl,aa,bb,cc): 19 30.0 -25.0 5.0


&SETUP INITD = 4, KHMAX = 9999, NUMPAR = 500, MAXPAR = 10000, MAXDIM = 1,
QCYCLE = 0.0000E+00,
FRME = 0.10000000,
FRMR = 0.0000E+00,
KRND = 6,
DELT = 0.0000E+00,
ISOT = 0,
TKER = 0.50000000,
NDUMP = 0,
NCYCL = 0,
TRATIO = 0.75000000,
MGMIN = 10,
KMSL = 0,
NSTR = 0,
CPACK = 1,
ICHEM = 0,
NOUT = 0, TM_PRES = 0, TM_TPOT = 0, TM_TAMB = 0, TM_RAIN = 0, TM_MIXD = 0,
DXF = 1.0000000,
DYF = 1.0000000,
DZF = 9.9998E-03,
NINIT = 1,
PINPF = PARINIT,
POUTF = PARDUMP
-- End Namelist configuration ---
NOTICE main: pollutant initialization flags
Gas pollutant - T
The file begins with the current version and release date, and then tabulates the values of the namelist variables.

Number of internal model sigma levels and the polynomial parameters used to describe the vertical grid. These
are configured automatically based upon the meteorological input data files defined for this simulation.

The value of all variables that can be defined by the SETUP.CFG namelist file are listed. If no SETUP.CFG file
was defined for this simulation, then the default values for these variables are listed.

The settings of all internal deposition flags are shown here. In this case the simulation is for a gas.

NOTICE metpos: (mtime,ftime) - 50379840 0
NOTICE metpos: (mtime,ftime) - 50379960 0
NOTICE advpnt: (kg,nx,ny,nz) - 1 10 10 19
NOTICE sfcinp: reading ASCDATA.CFG
NOTICE metgrd: (kg, xyr,xy1) - 1 10 10 17 9
NOTICE metinp: NGM 1 1 10 10 17 9 50379840 95 10 16 0
#################################
WARNING prfsig: extrapolation from level (k,m): 16 7159.121
Input data levels: 10
Internal Sigma levels: 19
##################################
NOTICE metinp: NGM 1 63 10 10 17 9 50379960 95 10 16 2
NOTICE main: Initial time step (min) 20
NOTICE main: 1 50379860 166 0.33333320
NOTICE main: 1 50379880 332 0.6666647
NOTICE main: 1 50379900 498 0.9999961
NOTICE main: 2 50379930 498 0.9999961
NOTICE main: 2 50379960 498 0.9999961
NOTICE metpos: (mtime,ftime) - 50380080 50379840
NOTICE metinp: NGM 1 125 10 10 17 9 50380080 95 10 16 4

The subroutine that determines which meteorological data are required is called for the first time. Data for times
840 and 960 are requested. The zeros for the second field indicate no data are in memory. Times are always in
relative minutes.

The first advection entry sets sub-grid #1 to 10x10 with 19 levels.

The surface boundary files are opened.

The lower left corner of sub-grid #1 is set at position 17,9 of the main meteorological data grid.

The NGM data for the first computational hour are read starting at record #1, loaded into a 10x10 sub-grid, corner
17,9, at time [x]840 for the date: 95 10 16 0

When these data are interpolated to the internal grid, it is determined that there are no input data records above
7159 m, therefore data for those levels are extrapolated.

Computations for the first hour require data at two time periods for interpolation (hours 0 and 2).

The initial time step was set to 20 minutes. Subsequent time steps may change.

The time, number of particles, and total mass are shown for the three time steps of the first hour. After one hour
the emission stops.
During hour two, no further emissions (or particles) are released. The time step is now 30 minutes.

After hour 2, new NGM data are required. The data in memory at 0 UTC (time 840) are replaced with data at
4 UTC (time 080).

The new data are input into the same sub-grid location.

NOTICE main: 3 50379990 498 0.9999961
NOTICE main: 3 50380020 498 0.9999961
NOTICE main: 4 50380050 498 0.9999961
NOTICE main: 4 50380080 498 0.9999961
NOTICE metpos: (mtime,ftime) - 50380200 50379960
NOTICE metinp: NGM 1 187 10 10 17 9 50380200 95 10 16 6
NOTICE main: 5 50380110 498 0.9999961
NOTICE main: 5 50380140 498 0.9999961
NOTICE main: 6 50380170 498 0.9999961
NOTICE main: 6 50380200 498 0.9999961

Index   Height    %Mass
  6      935.0     1.41
  5      630.0    10.84
  4      385.0    31.33
  3      200.0    29.72
  2       75.0    20.48
  1       10.0     6.22

Computations proceed as before for computational hours 3 and 4. Particle number remains the same because no
particles have moved off the computational domain. The mass remains the same because deposition is not turned on for
this simulation.

At the end of hour 4, data are required for 6 UTC (time 200) to proceed with the calculation. These data replace the 2
UTC (time 960) in memory.

Every 6 hours, the model prints out the vertical mass distribution of all the particles within the computational domain.
This is the mass distribution relative to the model's internal sigma levels and there is no relation to the levels that may be
specified for the concentration output file. Only non-zero levels are shown. The internal levels are defined by the
polynomial parameters given at the beginning of the MESSAGE file.

At this point the computation will continue for the number of hours specified in the CONTROL file. Vertical profiles
are shown every 6 hours. As the particles move across the computational domain, the sub-grid position may be moved
(from position 17,9) or the sub-grid expanded (larger than 10x10) to match the spatial extent of the particle distribution.
This may occur at any time during the computation or even multiple times during the same computational hour.

Advanced / Help
This section provides some guidance in configuring the model to perform certain specialized calculations. These include
deciding between particle or puff releases, dealing with continuous emissions, creating a file with time variations of the
emission rate, unit-source simulations with time-varying emissions, area source emissions, multiple pollutants, pollutant
transformations, atmospheric chemistry, deposition and decay, compilation limits, scripting for automation applications,
backward dispersion for source attribution, configuring the time or spatial variation of the emission rate using an input
file, and how to compute deposited pollutant transport on ocean water surfaces. The Advanced menu is composed of
four sections.

1. The Configuration Setup menu permits the creation and modification of the SETUP.CFG namelist file for either
Trajectory or Air Concentration calculations. The namelist file is a variable length file that is used to set
additional parameters that can be used to modify the nature of the simulation. The namelist file is not required
because all the namelist variables take on default values when the SETUP.CFG file is not found. The same
namelist file can be used for either trajectory or air concentration simulations but only certain variables are
applicable to each simulation. Supplemental plot labeling options are available, as is a menu to configure a dynamic
sampler that can move in space and time for use with concentration simulations, and a menu to configure the
default directory structure and the location of the executable programs used by the GUI. More complex point source
emission scenarios can be defined by creating the EMITIMES emissions file.

2. The View MESSAGES menu is used to display the diagnostic message file. For all trajectory or concentration
simulations, diagnostic information is written to standard output until the initialization process has completed. At
that point the MESSAGE file is opened and subsequent diagnostic and certain error messages are written to this
file. If the model does not complete properly, some additional diagnostic information may be obtained from this
file.

Meteorology / Help
Meteorological data fields to run the model are usually already available or can be created from routine analysis
archives or forecast model outputs. More complete descriptions of the different data sets are available on-line. The
meteorological data available through the menu system are summarized in the following sections.

The meteorology menu is divided into four sections: data FTP, data conversion, data display, and utility programs.

The FTP section provides access to various data files that may be FTP'd from the ARL server compatible for
immediate input to HYSPLIT. These files include current and appended forecasts as well as various regional and
global analysis archives. All data files have already been converted for HYSPLIT applications.
The convert menu gives data processing options for files that are stored on the local computer. Processing of
forecast or analysis files is available for archive, forecast, and regional or global data files in various formats.
Processing of GRIB1 formatted input files is available for global ECMWF and NOAA data as well as ERA-40.
NetCDF is supported on UNIX systems, permitting conversion of WRF-ARW data files. In addition, a simple
interface has been created to permit the entry of user-generated time-varying data at a single location.
The display data menu tab provides some simple tools that can be used to examine the data already in ARL
packed format. The most basic is the check file listing of the individual data records. The contour map program
will display a contour plot of any variable at any one time in the file. Multiple time periods may be processed. The
profile program returns a text file of the profile of meteorological parameters at a pre-selected point. The grid
domain menu shows the extent of the meteorological grid with a mark at each point.
The utility menu permits conversion of SVG files to any other graphical format and the creation of shapefiles.

More detailed information about the format of the ARL packed data is available. In addition, various library routines
and utility programs are provided to manipulate both the GRIB and ARL packed data files.

Meteorology / ARL Data FTP
The ARL web server contains several meteorological model data sets already converted into a HYSPLIT compatible
format in the public directories. Direct access via FTP to these data files is "hardwired" into the GUI. The data files are
automatically updated on the server with each new forecast cycle. Only an email address is required for the password to
access the server. The "FTP menu" is locked until the FTP process is completed, at which point a message box will
appear indicating the file name that was just transferred. Alternatively, there is an option to download the files directly
from the NOAA NOMADS website via HTTP (http://nomads.ncep.noaa.gov/pub/data/nccf/com/hysplit/prod/). Some
of the files are not available on NOMADS (see below).

Note that the NOAA models run by NCEP may be at higher spatial resolution than what is archived in a HYSPLIT
compatible format on the ARL server. If calculations using higher resolution data are required, then the original GRIB
encoded files must be obtained and converted as described in the data conversion section. GRIB decoding can be
difficult. Although the required software is provided through the GUI, the large file size downloads may limit the
practical application of these real-time data files.

Meteorology / Data FTP / Current Forecast

Forecast data are available for NOAA's Global Forecast System (GFS), the North America Model (NAM), and the
High-Resolution Rapid Refresh (HRRR). Forecast model data are available at various resolutions, projections, and
forecast time durations. The files are updated two to four hours after the cycle time. Forecast files are available for the
last several days depending upon the server and not all files are on all servers. All forecast files follow the same file
naming convention of hysplit.t{cycle}z.{name}, where cycle represents the initial UTC time of the forecast (e.g.
00,06,12,18) and the names are as follows:

gfsf :: 1-deg 3P +240h (814 Mb) global forecast at one-degree resolution at 3 hour intervals at pressure levels out
to +240 hours.
gfslrf :: 1-deg 6P +384h (251 Mb) long-range global forecast at one-degree resolution at 6 hour intervals on
pressure levels from forecast hours +240 to +384.
gfs0p25f :: 0.25-deg 3S +189h (2500 Mb) global forecast at quarter-degree resolution at 3 hour intervals at hybrid
levels out to +189 hours. The complete forecast is split over multiple files. The forecast start time must be
selected. There are only 21 hours per file. Files with forecast hours beyond +84 are not available on NOMADS.
namf :: 12-km 3P +84h (1616 Mb) regional forecast for the entire domain at 12 km resolution every three hours
on pressure levels out to +84 hours.
namsf :: 12-km 1H +48h (2636 Mb) regional forecast extracted for CONUS at 12 km resolution at one-hour
intervals on hybrid levels out to +48 hours.
namsf.AK :: 12-km 1H +48h (1523 Mb) regional forecast extracted for Alaska at 12 km resolution at one-hour
intervals on hybrid levels out to +48 hours.
namsf.HI :: 2.5-km 1H +48h (1030 Mb) regional forecast Hawaii nest at 2.5 km resolution at one-hour intervals
on hybrid levels out to +48 hours.
namsf.FW :: 1.5-km 1H +36h (1400 Mb) regional forecast Fire-Weather nest at 1.5 km resolution at one-hour
intervals on hybrid levels out to +36 hours (location, filesize, and horizontal resolution varies from cycle to
cycle).
namsf.{XX}tile :: 12-km 1H +48h (271 Mb) regional forecast extracted for the XX quadrant of the US at 12 km
resolution at one-hour intervals only for sigma surfaces below 0.6777 out to +48 hours. Quadrants are available for the
northeast (NE), southeast (SE), southwest (SW), and northwest (NW). Not available on NOMADS.
namsf{HH}.CONUS :: 3-km 1H +6h (3314 Mb) regional forecast CONUS nest for forecast hour beginning at
HH+1h at 3 km resolution at one-hour intervals on sigma surfaces out to +6, +12, +18, +24, +30, +36, +42 hours
for files f00,f06,f12,f18,f24, f30,f36,f42 respectively. The forecast start time must be selected. There are only 6
forecast hours per file.
hrrrf :: 3-km 1S +18h (3418 Mb) forecast for the CONUS at 3 km resolution at one-hour intervals on sigma surfaces
out to +18 hours. Updated every hour. The complete forecast is split over multiple files. The forecast start time must be
selected. Each file contains only 6 hours of data, except f18.
Additional information about the forecast data on the ARL server can be found at the ARL web page.

Meteorology / Data FTP / Appended Forecast

The appended forecast files consist of a special time extraction of the previous seven (-48 hr) forecast cycle files to
create a series of pseudo-analysis (0-hour initialization) and short-time (+3 h or +1,+2,+3,+4,+5) forecasts that have
been appended to each other to create a continuous data time series for the previous 48 hours in a single file. These
special time extracts are only available for the following files. For the NAM hybrid-level (nams) files the time period is
instead for the 24 hours prior to the forecast.

gfsa :: 1-deg 3P -48h (161 Mb)
nama :: 12-km 3P -48h (892 Mb)
namsa :: 12-km 1H -24h (1291 Mb)
namsa.AK :: 12-km 1H -24h (746 Mb)
namsa.HI :: 2.5-km 1H -24h (505 Mb)

Meteorology / ARL Data FTP / Archive
The ARL web server contains several meteorological model data sets already converted into a HYSPLIT compatible
format on the public directories. Direct access via FTP to these data files is "hardwired" into the GUI. Only an email
address is required for the password to access the server. The "FTP menu" is locked until the FTP process is completed,
at which point a message box will appear indicating the file name that was just transferred.

The ARL analysis data archive consists of output from the Global Data Analysis System (GDAS/GFS) and the North
American (EDAS/NAM) Data Analysis System. Data archives are available at various temporal and spatial resolutions.
See notes below for a more detailed discussion of each file.

EDAS :: 40 km 3P (>=2004 SM 650 Mb) Semi-Monthly (1-15; 16-end) data files at three hour intervals on
pressure surfaces
GDAS :: 1-deg 3P (>=2005 WK 600 Mb) Weekly files (W1=1-7; W2=8-14; W3=15-21; W4=22-28; W5=29-end)
every three hours on pressure surfaces
GDAS :: 0.5-deg 3S (September 1, 2007 - June 11, 2019 DA 500 Mb) Daily files every three hours on the native
GFS hybrid sigma coordinate system.
GFS.v1 :: 0.25-deg 3S (May 13, 2016 - June 12, 2019 DA 2500 Mb) Daily files every three hours on the native
GFS hybrid sigma coordinate system. NCEP GFS.v14
GFS :: 0.25-deg 3S (>= June 13, 2019 DA 2500 Mb) Daily files every three hours on the native GFS hybrid
sigma coordinate system. NCEP GFS.v15(fv3)
HRRR.v1 :: 3 km 1S (June 15, 2015 - July 23, 2019 DA 3500 Mb) Hourly HRRR output on sigma surfaces in
four files per day (00-05, 06-11, 12-17, and 18-23). The forecast hour is +1 for all times.
HRRR :: 3 km 1S (>=June 13, 2019 DA 3500 Mb) Hourly HRRR output on sigma surfaces in four files per day
(00-05, 06-11, 12-17, and 18-23). The forecast hour is +0 for all times.
NAM12 :: 12-km 3P (>=2007 DA 450 Mb) Composite archive 0 to +6 hour forecasts appended into daily files for
the CONUS at three hour intervals on pressure surfaces
NAMs :: 12 km 1S (>=2010 DA 1300 Mb) Composite archive 0 to +6 hour forecasts appended into daily files for
the CONUS at one hour intervals on sigma surfaces
NAMs-AK :: 12 km 1S (>=2010 DA 750 Mb) Composite archive 0 to +6 hour forecasts appended into daily files
for Alaska at one hour intervals on sigma surfaces
NAMs-HI :: 2 km 1S (>=2010 DA 500 Mb) Composite archive 0 to +6 hour forecasts appended into daily files
for Hawaii at one hour intervals on sigma surfaces
NARR :: 32 km 3P (>=1979 MO 3000 Mb) North American Regional Reanalysis on pressure surfaces at three
hour intervals in monthly files
REANALYSIS :: 2.5-deg 6P (>=1948 MO 220 Mb) NCAR/NCEP global reanalysis on pressure surfaces at six hour
intervals in monthly files
WRF :: 27 km 3S (>=1980 DA 220 Mb) Hourly WRF output in daily files on sigma surfaces where initial and
boundary conditions were taken from the NARR

Depending upon how the data are archived, it is necessary to enter at least the year and month in the GUI. For some
data sets the day, and for HRRR the hour, may also be required. The file names are created automatically.

Global NOAA-NCEP/NCAR pressure level reanalysis data archives were reprocessed into the HYSPLIT compatible
format and are available on ARL's web site from 1948. More information about the reanalysis project and data can be
found at the CDC web site.

A comparable data set is also available from ECMWF's 40 year reanalysis project. The GRIB files must be downloaded
independently from the GUI through a web browser. Although cost free, registration with ECMWF is required. The
downloaded ECMWF GRIB data files may be converted through the GUI to the ARL format.

Additional information about the data archives on the ARL server can be found at the ARL web page.
Meteorology / ARL Data FTP / Set Server
Meteorological data files formatted for use by HYSPLIT are available from several different FTP servers. The GUI
currently supports the definition of three different servers for forecast and analysis data. The default server locations and
their root data directories are written to the file default_ftp when the GUI is started for the first time. The FTP
application will only search for data at one location as shown in the text box. One may select either the default,
alternate, or backup sites. Not all data files are available at all sites. After selecting a new server, it is only necessary to
exit the GUI and the location in the text box will be used for all subsequent FTP commands.

It is possible to edit both the server location and the root data directory; exiting the GUI will then cause subsequent
FTPs to use the edited location. If the edits are to be saved, there are two save options. The first, Save Changes, just
updates the internal array so that the edited text is retained in the default, alternate, or backup location, and pressing
the radio-button will bring up the edited location rather than the original value for that entry. The second, Save to File,
saves the changes to the internal array as well as to the default_ftp file, so that the new values remain available after
the GUI is closed and reopened.

Meteorology / Convert to ARL / WRF-ARW
This menu tab is intended to convert WRF-ARW NetCDF output files to the ARL packed HYSPLIT compatible format.
Input files can have any name. The reformatted output file is named ARWDATA.BIN by default. The output file
can be renamed through the GUI menu. Multiple time periods from a single file can be processed into one output file.
However, if single-time multiple files are to be processed, then each time period should be given a unique name and
then the files can be "cat" or "type" together into a single file if that is desired. Also note that the HYSPLIT NetCDF
decoder library is not available under Windows and this program will only work in a UNIX or MAC environment.

Meteorology / Convert to ARL / Global Lat-Lon
This menu tab is intended to convert existing ECMWF or NOAA GRIB-1 files to the ARL packed HYSPLIT
compatible format. Subsequent file names will be constructed automatically between the starting and ending day at the
hour interval set by the radio-button. Day and hour are assumed to be the last four digits of the file name: {base}{DD}
{HH}. Input files could consist of multiple time period forecasts or single-time period archives. Time intervals of 12
hours or more should be avoided in transport and dispersion calculations. Only one month of data or less should be
processed into one output file. HYSPLIT cannot properly position to the correct starting data record if there is a month
transition included within a data file.
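For example, with a hypothetical base name of era40, the files era400100 and era400112 would be processed as the 00 UTC and 12 UTC data for day 01 of the month.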

The input data may be on pressure surfaces or the native hybrid coordinate system. The "number of levels", counted
from the surface upwards, can be used to restrict the size of the output file. This may be particularly useful when the
input file consists of 60 or more hybrid levels.

Four different grid conversions are available through the menu. The "Extract" option interpolates the data to a conformal
map projection of 100x100 grid points at 100 km resolution at the center of the latitude and longitude selection of the
slider bar. The Northern- or Southern Hemisphere options create a 95-km resolution polar stereo-graphic grid centered
about the pole. The "Input" option results in no interpolation and the program just converts the original latitude-
longitude data (global or regional) to the ARL packed format.

The conversions from this menu are intended for data sets created by the operational ECMWF or NOAA forecast
models. These may be forecast or analysis data. Conversion of ECMWF reanalysis data is done through a different
menu.

Meteorology / Convert to ARL / ECMWF ERA
This menu tab is intended to convert ECMWF reanalysis data archives (ERA-40) GRIB1 files to the ARL packed
HYSPLIT compatible format. Data files must first be downloaded from the ECMWF web page. The user can select
which fields and time periods to extract. A GRIB file may contain multiple time periods. No more than one month's data
should be included in a GRIB file if it is to be converted to ARL format. All the time periods from one input file will be
processed into one output file. The ECMWF download menu permits copying all the data on the global grid or
extracting regional sub-grids. The ARL conversion program can handle either option.

The conversion menu is designed to select three different GRIB file types: upper air data, surface data, and time
invariant data. The upper air and surface data files should contain data for the same time periods. To be able to run
HYSPLIT, the upper air data file must contain at a minimum the following variables: geopotential, temperature, u-
velocity, v-velocity, w-velocity, and relative humidity. The surface variable file should contain the 2m temperature, the
10m u- and v- velocity components, and the total precipitation field, if wet removal calculations are required. The
invariant file should contain the surface geopotential field (terrain height). Terrain height or surface pressure is required
for HYSPLIT to be able to interpolate pressure level data to its internal terrain following coordinate system.

The ECMWF input data may be on pressure surfaces or its native hybrid coordinate system. The "number of levels",
counted from the surface upwards, can be used to restrict the size of the output file. This may be particularly useful
when the input file consists of 60 or more hybrid levels. The rainfall summation time is the interval in hours at which
the rain "bucket" is emptied. For a pure forecast, the bucket is never emptied, for the interim data, the bucket is emptied
every 12 hours.

Four different grid conversions are available through the menu. The "Extract" option interpolates the data to a conformal
map projection of 100x100 grid points at 100 km resolution at the center of the latitude and longitude selection of the
slider bar. The Northern- or Southern Hemisphere options create a 95-km resolution polar stereo-graphic grid centered
about the pole. The "Input" option results in no interpolation and the program just converts the original latitude-
longitude data (global or regional) to the ARL packed format. The "Input" option must be used for latitude-longitude
sub-grids because interpolation from input data sub-grids is not supported.

Meteorology / Convert to ARL / User entered
This menu tab is intended to convert user entered meteorological data to the ARL packed HYSPLIT compatible format.
The conversion program will create a data file with one or more time periods containing a uniform field in space and
height but varying in time. The grid is centered over the location specified in the main menu and covers a 250 by 250
km domain at a horizontal resolution of 10 km. It is intended to be used for very short-range simulations. The main
menu permits the selection of an existing text input file (space delimited format), or the creation of a file through the
menu shown below. After the meteorological data have been entered and saved to a file, the conversion program is run
from the "Run convert" menu button.

The data entry widget contains five time fields and four data fields. The output file time interval is computed
automatically from the input data. No interpolation is performed and the input data nearest in time to the output time are
used for the conversion. The maximum output interval is one hour. Note that the date-time field defaults to the current
system clock time. The meteorological variables are wind direction, speed (m/s), mixed layer depth (m), and Pasquill
stability category (1=A to 7=G). After filling in the data for the first line, use the Repeat button to fill the remaining
table entries and then edit the time and data fields as required.

The conversion program computes the component turbulent velocity variances based upon the stability category and
wind speed and assumes those values are constant with height within the mixed layer. Above the mixed layer top the
variances are set to the model's minimum value. When using these data as input, the model should be configured in the
Advanced menu to use the variances for the dispersion. Otherwise the diffusion calculation will default to the
"deformation" method, and with a spatially constant wind field the dispersion will always be at its minimum value.
Meteorology / Display Data / Check File
The program called through this menu is used to show the details of the meteorological data file contained in the index
record as well as a listing of the header portion (first 50 bytes) of each data record. The output is written to a text file
called chkfile.txt. More detailed information about the structure of a meteorological data file is available. The check file
program uses the same library routines as HYSPLIT. Therefore if this program does not work with the meteorological
data file, neither will HYSPLIT.

Meteorology / Display data / Contour Map
The contour display program, run from this menu, provides a simple tool to create a contour map for any variable at any
time period in any ARL packed format meteorological data file. A simple configuration is available through the menu
shown below. More options are available if the program is run through the command line.

In the example shown above, the sample meteorological data file has been selected to display the temperature (TEMP)
variable on model level #2. The time period to be shown is offset 24 hours from the first time period in the file. No
additional time periods are shown when the increment is set to zero. The map will be centered about 40N 90W with a
radius of 20 degrees latitude. The maximum contour value and the temperature difference (Delta) between contours will
be determined automatically (-1.0). The radio-button default is to draw contour lines on top of the color fill.

For a latitude-longitude grid, the full grid cannot be plotted. Instead, a sub-grid must be set, and it cannot be centered
at 0 degrees latitude, 0 degrees longitude.

There are two additional derived display options available. Selecting the variable "VECT" will produce a wind vector
plot and selecting "DIVG" will produce a plot of the wind field divergence. Both of these options require the U and V
velocity fields at the selected level in the order of U followed by V.

The radio buttons show only the most common set of variables found in the meteorological data files. The menu does
not know which variables are available in the selected file. The check file program can be used to examine the variables
contained in each file. The variable selected for display using the radio-button may be superseded by replacing the text
entry with the desired variable identification.

The output file contour.html is created in HTML format with scalable vector graphics. It is shown below, as displayed
by the web browser, and may be converted to GIF through the Convert SVG menu tab.
Meteorology / Display Data / Text Profile
The profile program is used to create a profile at a specific location of the meteorological variables in an ARL format
packed data file. Output is written to the ASCII text file profile.txt. Note that the program does not interpolate to the
selected point, but processes the data at the grid point nearest to the location selected. In the example shown below the
sample meteorological data file has been selected to display the profile at 40N 90W. The time period to be shown is
offset 24 hours from the first time period in the file. No additional time periods are shown when the increment is set to
zero.

The output is written to a file called profile.txt and the following would be shown in the GUI's display window. The U
& V wind components shown on the left side are relative to the grid, while those on the right side are with respect to
north-south and east-west. They would be the same for a latitude-longitude grid.
Meteorology / Display Data / Grid Domain
The showgrid program is used to show the domain of the ARL packed format meteorological grid file. Output is to an
HTML file and contains a plus symbol at every grid point intersection as selected by the integer plotting increment. A
non-zero value for the Lat-Lon interval draws latitude-longitude lines at that interval over the map background. The
example below is shown for the sample meteorological data file.

The following graphic is produced:

Meteorology / Utilities / GIS to Shapefile
A utility tab is provided in the meteorology, trajectory, and concentration menus that will convert an ESRI generate
format text file to a Shapefile for import into ArcView or comparable GIS applications. The "ascii2shp" conversion
software is available under the GNU license from the Free Software foundation. The conversion can be run from the
command line or the GUI menu shown below.

Normally the fields will be blank when the menu is first invoked. Prior to creating the shapefiles it is necessary to create
the "Generate" text file which contains the latitude-longitude points of each contour as created by the contouring
programs available through the display menus. Checking the GIS box in the menu (-a1 option on the command line)
creates these files. All generate format files start with GIS and end with .txt. The remainder of the file name depends
upon the application from which it was created. However in all applications, the two-digit string prior to the .txt
identifies the frame number. There is one GIS output file per frame. Use the upper browse button to set the generate
format input file name.

The generate files can be converted to shapefiles consisting of points or lines (for trajectories), and lines or polygons
(for concentrations). Polygons are closed lines. A polygon that runs off the map display becomes a line and may not
be displayed correctly if treated as a polygon.

Normally the base.dbf file will contain only the minimum amount of information. However, the trajectory and
concentration plotting programs automatically create an enhanced attributes file (GIS_???.att). To add this information
to base.dbf, check one of the enhanced attributes options.

In the lower text entry box enter the base name of the output shapefiles. All output files will be created in the working
directory. Note that depending upon the upper level menu from which the conversion was called, the shapefile
conversion will be either for lines (meteorology), points (trajectory), or polygons (concentration), although the option
can be superseded by the checkbox selection.

What is a Shapefile?

The Shapefile format is a working and interchange format promulgated by ESRI for simple vector data with attributes. It
is apparently the only file format that can be edited in ARCView 2/3, and can also be exported and imported in
Arc/Info. An excellent white paper on the shapefile format is available from ESRI, but it is in PDF format, so you will
need Adobe Acrobat to browse it. The file format actually consists of four files.
XXX.shp - holds the actual vertices.
XXX.shx - hold index data pointing to the structures in the .shp file.
XXX.dbf - holds the attributes in xBase (dBase) format.
XXX.prj - holds the projection information.

Copyright

The source for the Shapefile C Library is (c) 1998 Frank Warmerdam, and released under the following conditions. The
intent is that anyone can do anything with the code, but that I do not assume any liability, nor express any warranty for
this code.

As of Shapelib 1.2.6 the core portions of the library are made available under two possible licenses. The licensee can
choose to use the code under either the Library GNU Public License (LGPL) described in LICENSE.LGPL or under the
following MIT style license. Any files in the Shapelib distribution without explicit copyright license terms (such as this
documentation, the Makefile and so forth) should be considered to have the following licensing terms. Some auxiliary
portions of Shapelib, notably some of the components in the contrib directory come under slightly different license
restrictions. Check the source files that you are actually using for conditions.

Default License Terms

Copyright (c) 1999, Frank Warmerdam

This software is available under the following "MIT Style" license, or at the option of the licensee under the LGPL (see
LICENSE.LGPL). This option is discussed in more detail in shapelib.html. Permission is hereby granted, free of charge,
to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the
Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject
to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Meteorology / Utilities / Convert SVG
A utility tab is provided in the meteorology, trajectory, and concentration menus that will convert the Scalable Vector
Graphic to another format. The menu shows the name of the last HTML file that was created. If that is not the file
desired, it should be replaced with the appropriate file name. All output files are assumed to reside in the working
directory and therefore a file from any menu can be converted from any of the conversion menu tabs. The file suffix
represents the conversion format. The slider bar determines the size of the output graphic in pixels per inch. The default
value of 70 will produce an output frame of comparable size to the input graphic. The checkboxes permit the creation of
a multiframe animation in one file or multiple output files if the "Frames" option has been checked. The "Crop" option
eliminates the white space around the graphic. However, the crop option may produce inconsistent results when used in
conjunction with the animation feature.

The conversion process uses the svgsplit program to read the HTML file and ImageMagick to convert that file to a
variety of other supported graphical formats. These programs can be stand-alone, but they have been linked through the
Tcl/Tk GUI menu. Conversion to GIF is the default. Proper installation of all the previously mentioned 3rd party
software is critical in the correct operation of the conversion process. This software is included with the HYSPLIT trial
version but not with the registered installation executable. Links to these programs are also available through the
HYSPLIT utilities web page.

If a non-default directory was selected during the installation of ImageMagick, it may be necessary to manually edit the
directory location. The directory pointer can be changed from the "Advanced-Configuration-Directories" menu tab,
which modifies the file default_exec. In the rare situation when this file cannot be edited or does not exist because the
GUI did not load because of problems with Tcl/Tk, then the appropriate lines in the upper-level Tcl/Tk file
..\guicode\hysplit.tcl need to be changed. For instance, the line that might require editing would be similar to the
following:

set magick_dir "c:/Program Files (x86)/ImageMagick-6.4.4-Q8/convert.exe"

Meteorology / Help / ARL Data Format
The following sections describe the ARL packed data format in a little more detail to permit the development of
customized applications. Library routines are provided to simplify the task of creating a model compatible data file. A
meteorological data file is composed of one or more time periods. Each time period begins with one or more ASCII
index records that summarize the valid time, the grid definition, the variables, and level information. Each subsequent
record contains one horizontal data field, consisting of 50 ASCII bytes of time, variable, and level information for that
record, followed by X times Y bytes of data, where X and Y are the number of data points in the horizontal and vertical
directions, respectively. Floating point or integer data are packed as one byte per variable. Precision is maintained by
packing the differences between adjacent grid points rather than packing the absolute values. In any time period,
although not required, the surface data precede the upper-level data fields. All records are of the same length to permit
the model to read the file in a "direct access" mode. Data files can be read on most computing platforms without any
transformation and appended to each other using routine operating system commands such as "cat" or "type". Only
binary transfers or copies are permitted. All the routines discussed in this section can be found in the source directory.
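For example, for a 100x100 point grid, every record, index and data alike, is 50 + 100*100 = 10050 bytes long, which is also the record length needed to open the file for direct access.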

Valid Meteorological Data Types

Meteorological variables are identified to the model by a unique four-character identification field that is written to the
first 50-byte header portion of each data record. Some of the variables that can be decoded by the model and their units
are identified below.

Sample Surface Level Parameters

Description                               Units      Identification

U-component of wind at 10 m               m/s        U10M
V-component of wind at 10 m               m/s        V10M
Temperature at 2 m                        K          T02M
Boundary Layer Height                     m          PBLH
Pressure at surface                       hPa        PRSS
Pressure at mean sea level                hPa        MSLP
Temperature at surface                    K          TMPS
Friction Velocity                         m/s        USTR
Friction Temperature                      K          TSTR
Surface Roughness                         m          RGHS
U-Momentum flux                           N/m2       UMOF
V-Momentum flux                           N/m2       VMOF
Sfc sensible heat flux                    W/m2       SHTF
Latent heat flux                          W/m2       LTHF
Downward short wave flux                  W/m2       DSWF
Relative humidity at 2 m                  %          RH2M
Specific humidity at 2 m                  kg/kg      SPH2
Convective Available Potential Energy     J/kg       CAPE
Total cloud cover                         %          TCLD

POSSIBLE PRECIP FIELDS

Total precipitation for whole dataset     m          TPPA
Total precipitation (24-h)                m          TPPD
Total precipitation (12-h)                m          TPPT
Total precipitation (6-h)                 m          TPP6
Total precipitation (3-h)                 m          TPP3
Total precipitation (1-h)                 m          TPP1
Precipitation Rate (6-h)                  m/minute   PRT6
Precipitation Rate (3-h)                  m/minute   PRT3

Sample Upper-Level Parameters

Description                               Units      Identification

U wind component (respect to grid)        m/s        UWND
V wind component (respect to grid)        m/s        VWND
Geopotential height                       gpm        HGTS
Temperature                               K          TEMP
Pressure vertical velocity                hPa/s      WWND
Relative Humidity                         %          RELH
Specific Humidity                         kg/kg      SPHU
Vertical velocity                         m/s        DZDT
Turbulent kinetic energy                  m2/s2      TKEN

Data may be obtained from any source; however, there are certain minimum data requirements to run the model: surface
pressure or terrain height, u-v wind components, temperature, and moisture (RELH or SPHU).

It is highly recommended to include both surface pressure and terrain height. If only surface pressure is available, then
terrain height will either be estimated from the surface pressure or read from the TERRAIN.ASC file (see also the
TERRFLG parameter in SETUP.CFG). If only terrain height is available, then surface pressure will be estimated from
the terrain height. Precipitation is required for wet removal calculations.

Not required, but necessary to improve vertical mixing calculations, is some measure of low-level stability. This may
take the form of a near-surface wind and temperature or the fluxes of heat and momentum.

It is also important to have sufficient vertical resolution in the meteorological data. Some of the NOAA higher
resolution data files have five or more levels in the boundary layer (<850 hPa) in addition to wind and temperatures near
the surface, usually at 2 and 10 m agl. These surface values are especially important when the data are only available on
absolute pressure surfaces, rather than terrain following sigma surfaces, to avoid interpolation to the surface between
data levels when the local terrain is well above sea-level.

Starting with HYSPLIT (Revision 596) the optional field DIFF is recognized as the difference between the original data
and the packed data: DIFF=ORIGINAL-PACKED. The effect of this is to increase the precision of variables that have
this additional field. When the DIFF field is read by HYSPLIT, the values are added to the data field resulting in values
closer to the original data. Currently only DIFW and DIFR (vertical velocity and precipitation) are recognized as valid
DIFF fields.

Creation of a Meteorological Input Data File

One may prepare meteorological data from any number of different sources to be in a suitable format for the model
using a series of routines described in this section. In general it is assumed one has access to a meteorological data
source, either the data fields are already on a grid, such as those output from a meteorological model, or perhaps from
observations that have been interpolated to a grid. Some example conversion programs are provided within the GUI to
convert from either NOAA or ECMWF GRIB format data files to the HYSPLIT format.

The meteorological data are processed in time-sequence, calling the subroutines provided, to create a HYSPLIT
compatible output file. These subroutines will pack the data and write the index record. The index record, which
precedes the data records for each time period, can only be written after the data records are processed. The packing
routines must first be initialized by setting the appropriate grid parameters and defining all the meteorological variables
that will be written to the file. Multiple output grids may be defined and written simultaneously by invoking the
PAKSET routine with a different unit number for each grid. The grid parameters are all defined in a configuration file,
which should be in the directory from which the procedure is invoked:

CALL PAKSET (kunit,fname,krec,nx,ny,nz)

Kunit is the Fortran unit number to which the data records will be written. Fname is the character string of the name of
the configuration file. It can be any name, but it will default to metdata.cfg. The file is opened internally on kunit to read
the configuration file. This routine needs to be called once for each grid. Krec is the starting record number (of the index
record) to which output will be written. It is normally set to 1 unless you want to start writing in the middle of a file. The
remaining parameters (nx, ny, nz) are returned by the subroutine and define the horizontal and vertical grid dimensions.
These values can be used to set variable dimensions. It is your responsibility to open kunit for output after having
completed the pakset calls:

OPEN(file=myfile, unit=kunit, form='unformatted', access='direct', recl={50+nx*ny})

The individual data records are packed and written by a call to PAKREC, once for each variable at each level. The
routine calculates the record offset from the index record according to the variable and level information provided in the
arguments and writes the record according to the order specified in metdata.cfg. The data can be supplied in any order.
Note that although the level indicator (LL) goes from 1 to the number of levels, one is subtracted before it is written to
the 50 byte header to be consistent with the definition of surface data to be at level "0". All the records in a time period
may be initialized according to the value of the kini flag. Initialization fills in the time variable for all records and
assigns the variable identification field as null.

CALL PAKREC (kunit,rvar,cvar,nx,ny,nxy,kvar,iy,im,id,ih,mn,ic,ll,kini)


kunit - integer unit number of the defined file
rvar - input array of real*4 data values
cvar - character*1 array of packed data values
nx,ny - integer horizontal grid dimensions
nxy - integer product nx*ny and length of cvar
kvar - character*4 descriptor of variable being written
iy,im - integer year and month
id,ih - integer day and hour
mn - integer minutes (usually 0)
ic - integer forecast hour (hours from initialization)
ll - integer level indicator (1 to NZ)
kini - integer initialization flag (0-no; 1-yes)

When all the data records for a time period have been written, it is necessary to close that time period by writing its
index record:

CALL PAKNDX(kunit)

At this point your program can return to PAKREC if data records for additional time periods are to be added to the file.
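The complete calling sequence for packing a single time period is illustrated by the minimal sketch below. The program name, output file name, date, and maximum grid dimensions are illustrative; a real converter would loop over all variables, levels, and time periods, and should verify that the dimensions returned by PAKSET do not exceed the declared array sizes.

      PROGRAM METPACK
C     minimal sketch of the ARL packing sequence (names illustrative)
      PARAMETER (MX=100, MY=100)
      REAL RVAR(MX,MY)
      CHARACTER*1 CVAR(MX*MY)
      KUNIT=20
C     initialize packing from metdata.cfg; returns grid dimensions
      CALL PAKSET (KUNIT,'metdata.cfg',1,NX,NY,NZ)
C     open the output file with the fixed record length
      OPEN (KUNIT,FILE='ARLDATA.BIN',FORM='UNFORMATTED',
     &      ACCESS='DIRECT',RECL=(50+NX*NY))
C     ... fill RVAR with the surface pressure field here ...
C     pack and write surface pressure (level 1) valid 95 10 16 00
      CALL PAKREC (KUNIT,RVAR,CVAR,NX,NY,NX*NY,'PRSS',
     &      95,10,16,0,0,0,1,1)
C     ... one PAKREC call for each variable at each level ...
C     close the time period by writing its index record
      CALL PAKNDX (KUNIT)
      CLOSE (KUNIT)
      END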

The key to the process is creating the proper configuration file for the data set you want to create. Some of the sample
data decoders provided dynamically configure the packing configuration file based upon the command line input
information. A sample metdata.cfg file for NOAA's global model output is shown below. An extract of the global one-
degree latitude-longitude GRIB model data have been interpolated to a 100 km resolution Lambert Conformal
projection 100x100 point grid centered about 45N 90W. The configuration file format is such that the first 20 characters
are a dummy identification field followed by the data.

Example Meteorological Packing Configuration File

Col 1 (A20) .... Col 21 (A4,2I4,12F10,3I4)


Data Source........ GFSX
Grid Number........ 99
Z-Coordinate....... 2
Pole Latitude...... 45.0
Pole Longitude..... -90.0
Reference Latitude. 45.0
Reference Longitude -90.0
Reference grid size 100.0
Orientation........ 0.0
Tangent Latitude... 45.0
Synch point X...... 50.5
Synch point Y...... 50.5
Synch Latitude..... 45.0
Synch Longitude.... -90.0
Reserved........... 0.0
Number X points.... 100
Number Y points.... 100
Number of Levels... 16

Format: F6.,I3,(1X,A4)

Level 01:.0000 6 PRSS MSLP TPP6 U10M V10M T02M
Level 02:1000. 6 HGTS TEMP UWND VWND WWND RELH
Level 03: 975. 6 HGTS TEMP UWND VWND WWND RELH
Level 04: 950. 6 HGTS TEMP UWND VWND WWND RELH
Level 05: 925. 6 HGTS TEMP UWND VWND WWND RELH
Level 06: 900. 6 HGTS TEMP UWND VWND WWND RELH
Level 07: 850. 6 HGTS TEMP UWND VWND WWND RELH
Level 08: 800. 6 HGTS TEMP UWND VWND WWND RELH
Level 09: 750. 6 HGTS TEMP UWND VWND WWND RELH
Level 10: 700. 6 HGTS TEMP UWND VWND WWND RELH
Level 11: 600. 6 HGTS TEMP UWND VWND WWND RELH
Level 12: 500. 6 HGTS TEMP UWND VWND WWND RELH
Level 13: 400. 6 HGTS TEMP UWND VWND WWND RELH
Level 14: 300. 6 HGTS TEMP UWND VWND WWND RELH
Level 15: 200. 6 HGTS TEMP UWND VWND WWND RELH
Level 16: 100. 6 HGTS TEMP UWND VWND WWND RELH

It is important that the information contained in this file is correct because it not only controls the writing of the packed
meteorological data file, but much of the information is written into the index record of each time period. The model
decodes this information to set up the internal processing of the meteorological data. Starting with HYSPLIT V4.5, the
model is also capable of using meteorological data on a latitude-longitude grid. Previous versions were limited to data
on a conformal map projection. Data on a regular latitude-longitude grid still need to be converted to the ARL packed
format. Modifications to the packing configuration file required to support a latitude-longitude grid are noted below. A
complete description of metdata.cfg format follows:

Description of the Meteorological Packing Configuration File

Record 1 consists of a four-character string that identifies the source of the meteorological data. This string will be
passed through to many of the output graphics.

Record 2 is an optional integer identification of the meteorological data grid. It was used extensively in previous
meteorological data file formats. It is not used in HYSPLIT applications.

Record 3 is an integer number that identifies the vertical coordinate system. Only four coordinate types are recognized:
1-pressure sigma; 2-pressure absolute; 3-terrain sigma; 4-hybrid sigma.

Records 4 & 5 identifies the pole position of the grid projection. Most projections will either be defined at +90 or -90
depending upon the hemisphere. The longitude would be the point 180 degrees from which the projection is cut. Lat-
Lon Grids only: contains the latitude and longitude of the grid point with the maximum grid point value. Note that lat-
lon grids grids should be defined with reference to the dateline.

Records 6 & 7 is the reference position at which the grid spacing is defined. Lat-Lon Grids only: contains the grid
spacing in degrees latitude and longitude.

Record 8 is the grid spacing in km at the reference position. Lat-Lon Grids only: a value of zero signals that the grid is a
lat-lon grid.

Record 9 is the grid orientation or the angle at the reference point made by the y-axis and the local direction of north.
Lat-Lon Grids only: value always = 0.

Record 10 is the angle between the axis and the surface of the cone. For regular projections it is equal to the latitude at
which the grid is tangent to the earth's surface. A polar stereo-graphic grid would be tangent at either 90 or -90, while a
Mercator projection is tangent at 0 latitude. A Lambert Conformal projection would be in between the two limits. An
oblique stereo-graphic projection would have a cone angle of 90. Lat-Lon Grids only: value always = 0

Records 11 & 12 are used to equate a position on the grid with a position on the earth as given in the following two
records:

Records 13 & 14. In this example, the position indicated is the center of the grid located over the North Pole. Any
position is acceptable. It need not even be on the grid. Lat-Lon Grids only: contains the latitude and longitude of the 1,1
position grid point.

Record 15 is not currently used.

Records 16 & 17 identify the number of grid points in each direction.

Record 18 is the number of levels in the vertical, including the surface level.

Record 19, through the number of levels, identifies the height of each level in appropriate units according to the
definition of the vertical coordinate, the number of variables at that level, and the four-character identification string
for each variable. The height units are as follows for each coordinate:

1-sigma (fraction)
2-pressure (mb)
3-terrain (fraction)
4-hybrid (mb: offset.fraction)

Decoding Meteorological Data Files

One may want to develop other applications for HYSPLIT compatible meteorological data files. For these situations,
some lower-level routines are provided in the source code library. The key to reading the meteorological files is
decoding the index record. The format for this record is given below. Complete descriptions are similar to the variables
in the discussion above.

FORMAT DESCRIPTION

(A4) Data Source


(I3) Forecast hour (>99 the header forecast hr = 99)
(I2) Minutes associated with data time
(12F7) 1- Pole Lat, 2- Pole Lon, 3- Tangent Lat, 4- Tangent Lon, 5- Grid Size, 6- Orientation, 7- Cone Angle, 8-
Xsynch point, 9- Ysynch point, 10- Synch point lat, 11- Synch point long, 12- Reserved
(3I3) 1- Number x points, 2- Number y points, 3- Number levels
(I2) Vertical coordinate system flag
(I4) Length in bytes of the index record, excluding the first 50 bytes

LOOP through the number of data levels

(F6) height of the first level


(I2) number of variables at that level

LOOP through the number of variables

(A4) variable identification


(I3) rotating checksum of the packed data
(1X) Reserved space for future use
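
For illustration only, the fixed-length portion of this header could be decoded with a single formatted internal read, a
sketch assuming the record has already been read into a character buffer and the standard 50 bytes at the start of the
record (see the record length note above) have been skipped; variable names are arbitrary:

CHARACTER*108 HEADER
CHARACTER*4 SOURCE
REAL GRIDS(12)
INTEGER IFHR,MINS,NXP,NYP,NZP,KFLAG,LENH
C decode the fields from data source through index record length
READ(HEADER,'(A4,I3,I2,12F7.0,3I3,I2,I4)') SOURCE,IFHR,MINS,GRIDS,NXP,NYP,NZP,KFLAG,LENH

The level and variable information that follows would be decoded in the same manner using the loop formats listed
above.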

Once the index record has been read and decoded you have sufficient information to read and decode the data records. A
data un-packer is provided to convert the packed character*1 array to a real*4 array. It can also be used to extract a sub-
grid from the full domain through specification of the sub-grid lower left corner:

CALL PAKINP (rvar,cvar,nx,ny,nx0,ny0,lx,ly,prec,nexp,var1,ksum)

rvar - real output array of integer dimensions lx,ly


cvar - character*1 packed input array of length nx*ny
nx,ny- integer dimensions of the full grid
nx0 - integer sub-grid position of left edge in nx units
ny0 - integer sub-grid position of lower edge in ny units
lx - integer first dimension of sub-grid length
ly - integer second dimension of sub-grid length
prec - real precision of packed data array
nexp - integer scaling exponent of packed data array
var1 - real value of array at position (1,1)
ksum - integer rotating checksum of packed data array

If the entire grid is to be unpacked then nx0=ny0=1 and nx=lx, ny=ly. The checksum (ksum) that is returned should be
compared with the corresponding value in a table generated from reading the index record. If you are not going to
compare the checksum, set ksum = -1; this will save a little computer time. Due to the sub-grid option the checksum
cannot be computed in the regular unpacking loop, but requires a second pass through the data. The checksum pass is
enabled when ksum=0. It will then return a non-zero value. If you do not reset it to zero, no further checksums will be
computed.
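
For example, a minimal sketch of unpacking the full domain with the checksum enabled on the first call (the grid
dimensions are illustrative; prec, nexp, and var1 would be taken from the label of the record being unpacked):

PARAMETER (NX=100,NY=100)
CHARACTER*1 CVAR(NX*NY)
REAL*4 RVAR(NX,NY)
C enable the checksum pass on this call
KSUM=0
CALL PAKINP(RVAR,CVAR,NX,NY,1,1,NX,NY,PREC,NEXP,VAR1,KSUM)
C KSUM now holds the rotating checksum for comparison with the index record value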

If you want to create your own packed data by converting a real*4 array to the character*1 packed data array use the
following:

CALL PAKOUT(rvar,cvar,nx,ny,nxy,prec,nexp,var1,ksum)

Although the structure of the packed data may seem complex, unpacking is a very simple process, the basic elements of
which are summarized in the Fortran code shown below. The value of each element is the sum of the initial value and
the difference stored in that element location.

SUBROUTINE UNPACK(CPACK,RVAR,NX,NY,NXY,NEXP,VAR1)

CHARACTER*1 CPACK(NXY)
REAL*4 RVAR(NX,NY)
C scaling factor as determined by the packing exponent
SCALE=2.0**(7-NEXP)
C the real value at grid point (1,1) initializes the differencing
VOLD=VAR1
INDX=0
DO J=1,NY
DO I=1,NX
INDX=INDX+1
C add the stored byte difference to the previous value
RVAR(I,J)=(ICHAR(CPACK(INDX))-127.)/SCALE+VOLD
VOLD=RVAR(I,J)
END DO
C each new row starts from the first value of the previous row
VOLD=RVAR(1,J)
END DO
RETURN
END

Table of Contents
Trajectory / Help
The trajectory menu tab is composed of five main sections: setting up the simulation, executing the model, displaying
the trajectory, converting the graphic to other formats, and running special simulations. The model is configured and
executed through the menu, and the results are displayed. However, for experienced users, each component can be run
independently from the command line, and the CONTROL file can be created using any text editor.

In the Trajectory Setup menu the entire purpose of the GUI is to configure the model's input CONTROL file. This is a
text file that configures the simulation parameters. Once the input parameters are set to their desired values, the model is
executed from the Run Model menu tab. When complete, the output window is closed and the Trajectory Display menu
is used to draw and display the trajectory from the endpoints output file. The Special Runs menu is used to configure
several different customized simulations of multiple simultaneous trajectories.

For inexperienced users, a review of the Quick Start Help, which goes through the trajectory computation step-by-step
using the example meteorological data file, is highly suggested.

Table of Contents
Trajectory / Quick Start Help
After the initial installation, the model will be configured to run the example case discussed in more detail below. The
"Quick Start" menu tab can be used to run the example or the previous simulation in one-step. For more detailed
simulation configurations, follow the steps below.

The easiest way to run the model is to use the GUI menu to create the model's input CONTROL file. For the purposes of
this demonstration appropriate meteorological files are provided. If for some reason the menu system is not available,
perhaps because Tcl/Tk was not installed, the CONTROL file can be created manually.

Step 1 - start the GUI menu system using \working\hysplit.tcl or the desktop shortcut to Hysplit. A widget will appear
with the HYSPLIT graphic and three button options: Menu, Help, and Exit. Click on the Menu tab.

Step 2 - The four main menus of the Hysplit GUI will appear: Meteorology, Trajectory, Concentration, and Advanced.
An additional small widget underneath the main menu gives the current Hysplit version information. Do not delete this
widget as it will terminate the GUI. It provides the reference frame for the model's standard output and messages. Click
on the Trajectory tab.

Step 3 - Five options appear under this item: Setup Run, Run Model, Display, Utilities, and Special Runs. Normally
these are run in sequence, however any item can be selected and run if the appropriate input files were created during a
previous simulation. Click on the Setup tab.

Step 4 - Setup Run is used to enter the basic model simulation parameters: the starting time of the calculation; starting
location in terms of latitude, longitude, and height; the run-time or duration of the trajectory calculation; and the names
and locations of all required files. When modifications to this menu are complete, click on Save. However for this
example, you will use the Retrieve option for predefined configurations, so do nothing here and go on to Step 5.

Step 5 - The example calculation is configured by clicking on Retrieve and then entering the text: sample_traj, which
is the name of the example simulation control file that was created for this demonstration. Then click on OK. After the
data entry widget is closed, click on Save and the setup menu will close.

Step 6 - Click on Run Model, which first copies the setup configuration (default_traj) saved in the previous step, to the
model's input CONTROL file. The model calculation is then started. A series of messages will appear on standard output
text window showing the progress of the calculation. When the simulation is completed, the trajectory end-points output
file is ready to be converted for graphical display. Under some operating systems, the standard output widget will not
show any output until the end of the calculation and the Trajectory menu items will be locked until the calculation
completes.

After completion click on Exit to close the window.

Step 7 - Selecting Display will run a special program that converts the text file of trajectory end-point positions into a
high quality SVG file (trajplot.html) suitable for printing. The conversion widget provides options for the frequency of
the labels on the trajectory, a variable zoom factor, and color or black and white graphics options. If a web browser has
been installed and associated with the .html file suffix, then it will be invoked by the GUI. If the web browser does not
automatically open, it may be necessary to manually edit ../guicode/hysplit.tcl to change the directory location
associated with the web browser program. After clicking on Execute Display, the following graphic will appear in the
web browser window:
Table of Contents
Trajectory / Saving and Retrieving
The trajectory and concentration setup menus for the CONTROL file permit a Save As or Retrieve option. Each of these
menus is shown in the panel below. After a simulation has been configured, run, displayed, and assuming that the results
are satisfactory and no more changes are required, then go back to the setup menu tab and Save As the control file by
giving it a unique name, such as newname_traj in the example below. There are no restrictions on the naming
convention. If the simulation is to be rerun, perhaps with some minor variation, the Retrieve option is used to load those
values back into the setup menu.

Table of Contents
Trajectory / Run Model
Once the Trajectory Setup menu has been closed with the Save button, the changes to the simulation parameters are
copied to default_traj. Clicking on the Run Model menu tab first copies default_traj to CONTROL and then runs the
trajectory model executable, hyts_std{.exe}. The executable, by default, attempts to open a file named CONTROL to
read all the required input parameters. If not found, the model will prompt to standard output for values from standard
input. This condition should not occur running the model through the GUI. When the model execution starts, output
messages are written to a special window, an example of which is shown in the illustration below:

Successful completion of a simulation will show a similar message. Additional run-time diagnostic messages and other
error messages are always written to a file called MESSAGE. This file may be viewed through one of the Advanced
Menu tabs. Depending upon the nature of the error message, perhaps a failure in the model initialization process, error
messages may also appear in the above window. Once the model has completed, press Exit to close the window.

Table of Contents
Trajectory / Display / Trajectory
The trajectory model generates a text output file of end-point positions. The end-point position file is processed by
trajplot to produce the Postscript display file. Trajplot can be accessed through the GUI or run directly from the
command line. The display program has a variety of command line options, most of which are available through the
GUI. There is a one-to-one relationship between GUI options, an example of which is shown below, and the command
line options. There are several features particular to the GUI. First the Trajectory Display menu will not open unless the
Trajectory Setup menu was first opened and Saved. This procedure sets all the GUI parameters to their default_traj
values. These are the values shown in the Display menu. Normally the map projection is computed automatically based
upon the location and length of the trajectory. However, some trajectory combinations may give no maps or improperly
scaled maps. In these situations one would want to override the default projection and try forcing a selection.

Trajplot has an additional implementation in the Python language. By default, the GUI uses trajplot built from
FORTRAN code. To use the Python implementation, select the Python tab near the bottom of the user interface before
generating a plot.

Trajplot Command Line Options

The Postscript conversion program trajplot reads the trajectory endpoints output file, calculates the optimum map for
display, and creates the output file - trajplot.ps. When executed from the command line, there are several Unix-style
command line options. There should be no space between the option and any arguments: trajplot -[options (default)]

-a[GIS Output: 0-none, 1-ESRI Generate, 3-Google Earth]

Selecting the ESRI Generate format output creates an ASCII text file for each output time period that consists of
the value and latitude-longitude points along each trajectory. This file can be converted to an ESRI Shapefile or
converted for other GIS applications through the utility menu "GIS to Shapefile". The view checkbox would be
disabled to do just the GIS conversion without opening the Postscript viewer. Selecting Google Earth will create a
compressed file (*.kmz) for use in Google Earth, a free software package to display geo-referenced information
in 3-dimensions. You must have the free Info-Zip file compression software installed to compress the Google
Earth file and associated graphics. The Python implementation can take several formats. See the description for
the --more-gis-options option below.

-e[End hour to plot: #, (all hours)]

-f[Frames: (0)-all frames in one file 1-one frame per file]

= 0 - all files plotted on one frame


= 1 - one input file per output frame

-g [Circle overlay: ( )-auto, #circ (4), #circ:dist(km)]

= {blank} - draws four distance rings about source point, automatically scaled
= {#} - draws the indicated number of rings about the source point
= {#1}:{#2} - draws the number {#1} of rings about the source point each {#2} kilometers apart. In the special
case where #1 is zero, the rings are not drawn, but the size of the plot is scaled as if one ring of radius {#2} is to
be drawn.

+g[Graphics type: (0)-Postscript, 1-SVG]

= 0 - Output in Postscript.
= 1 - Output in HTML containing Scalable Vector Graphics.
Note that this option is unavailable in the Python version.

-h [Hold map at center lat-lon: (source point), lat:lon]

= latitude:longitude - forces the map center to be at the selected location instead of the source point. In
conjunction with the -g0:{km} option it is possible to create a constant map projection for each execution.

-i[Input files: name1+name2+name3+... or +listfile or (tdump)]

= tdump - default file name when the -i option is undefined.


= {user defined file name}
= {name1+name2+...} - to overlay multiple tdump files on the same plot based upon file name1 plot limits.
= +filename - filename is a file of filenames to be plotted following the same convention as the previous argument
(limit <=5000 files).

-j[Map background file: (arlmap) or shapefiles.(txt|suffix)]


= arlmap - default ASCII file
= shapefiles.(txt|suffix) - file of shapefile names

-k[Kolor: 0-B&W (1)-Color, N:colortraj#1,...colortraj#N]

= 0 - for black and white output


= 1 - for color differentiation of multiple trajectories
= N - 1=red,2=blue,3=green,4=cyan,5=magenta,6=yellow,7=olive

-l[Label interval: -12, -6, 0, (6), 12, ... hrs]

= 0 - for no labels along the trajectory


= 6 - for labels every 6 hours
= 12 - for labels every 12 hours
= {X} - positive at synoptic times
= {-X} - negative with respect to traj start

-L[LatLonLineLabels]

= 0 - none
= 1 - auto
= 2:tenths - at interval specified

-m[Map projection: (0)-auto 1-polar 2-lambert 3-mercator 4-CylEqu]

-o[Output file name: (trajplot.ps)]

= trajplot.ps is the default Postscript output file name, otherwise it may be {user defined}. The Python
implementation supports several formats besides Postscript. See the description for the --more-formats option
below.

-p[Process file name suffix: (ps) or process ID]

-s[Symbol at trajectory origin: 0-no (1)-yes]

-v[Vertical: 0-pressure (1)-agl 2-isentropic 3-meteorology 4-none]

= 0 - The vertical trajectory plot coordinate is in hPa (absolute pressure units).


= 1 - The vertical trajectory plot coordinate is meters above ground level.
= 2 - An isentropic coordinate display requires the trajectory to have been run in those coordinates.
= 3 - Forces the display of one of the selected meteorological variables along the trajectory rather than a trajectory
vertical position coordinate. The meteorological variable is selected in the "Trajectory" tab of the Advanced
Configuration menu. Only one meteorological variable can be selected for display. If multiple variables are
selected, then only the last variable will be plotted.
= 4 - No vertical projection drawn.

-z[Zoom factor: 0-least zoom, (50), 100-most zoom]

= 50 - for standard resolution maps.


= 100 - for high resolution maps (maximum zoom)
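
As an illustration of the command line syntax, the following invocation (file names arbitrary) would plot the endpoints
file tdump in color, with labels every 6 hours, four automatically scaled distance rings, and a higher zoom factor:

trajplot -itdump -otrajplot.ps -k1 -l6 -g4 -z80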

Additional Trajplot Command Line Options for Python Implementation

The following options are available only to the Python trajplot.


--debug

Print debug messages. This is useful for developers to diagnose an issue.

--interactive

Enter interactive mode. Users can zoom in or move the plot area.

--more-formats=f1[,f2,...]

Specify one or more additional output format(s). This option supplements the output format specified by the -o
option. For example, for -oa.ps --more-formats=pdf,png, three files would be produced, namely, a.ps, a.pdf, and
a.png. Supported formats are eps, jpg, pdf, pgf, png, ps, raw, rgba, svg, svgz, and tif.

--more-gis-options=f1[,f2,...]

Specify one or more additional GIS output format(s). This option supplements the -a option. For example, with -
a1 --more-gis-options=3, both ESRI Generate and Google Earth files will be created.

--source-time-zone

Display dates and times as a local time at the source location. If the option is not given, dates and times will be in
Coordinated Universal Time (UTC).

--street-map[=n]

Show street map in the background. Currently, the option value n may take 0 (TERRAIN) or 1 (TONER). If no
option value is used (i.e., --street-map), n = 0 will be used. This option overrides the -j option.

--time-zone=tz

Show dates and times as a local time at the given time zone tz. The time zone should be listed in the pytz Python
package. For example, it could be US/Eastern, America/New_York, Etc/GMT-5, and so on.

Additional Map Label Customization

Many of the Postscript graphics programs have label information that can be customized to some extent. This is
accomplished by placing a file called Labels.cfg in the startup directory containing entries like the following (each
string in single quotes terminated by &), replacing the default string with the desired text. A sample file called
Labels.bak may be found in the relevant directory. Not all label strings are valid with every plotting program. For
instance, with trajplot only the title entry would be used to replace the top label line of the plot.

'TITLE&','NATIONAL WEATHER SERVICE&'


'MAPID&','PROBABILITY&'
'LAYER&','BOUNDARY LAYER AVERAGE&'
'UNITS&','BQ&'
'VOLUM&','/M3&'

Additional supplemental text may be added at the bottom of the graphic by creating a file called MAPTEXT.CFG, also
to be located in the startup directory. This is a generic file used by all plotting programs, but each program will use
different lines in its display. The file can be created and edited through the Advanced menu tab.

Standard ASCII Map Background File

By default, all mapping programs use the same text map background file, arlmap, which normally would be located in
the ../graphics directory. However, all graphics programs first search the local start directory (../working if running the
GUI), then the ../graphics directory. Customized map background files can be read instead of the default file for
specialized applications. Some higher resolution map background files are available from the HYSPLIT download web
page. These different map files may be accessed implicitly by putting them in ../working (the startup directory), or
explicitly through the GUI by entering the name of the customized map background file. The map background file
format consists of three or more records per graphic line with the following format:

2I5 - line number, number of latitude-longitude points in the line


10F6.2 - latitude points in line
10F7.2 - longitude points in line
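
For example, a single line segment of four (hypothetical) points would be written as:

    1    4
 40.00 41.00 42.00 43.00
 -90.00 -88.00 -86.00 -84.00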

ESRI Shapefile Map Background Files

Another mapping option would be to specify a special pointer file (originally called shapefiles.txt, but now a suffix other
than "txt" is permitted) to replace the map background file arlmap in the -j command line option (see above). Note that
-jshapefiles... rather than -j./shapefiles... is required. This file would contain the name of one or more shapefiles that can
be used to create the map background. The line characteristics (spacing, thickness, color) can be specified for each
shapefile following the format specified below:

Record format: 'file.shp' dash thick red green blue


file.shp = /dir/name of input shapefile in quotes
dash = {0} for solid; {dashes}/in; <0 color fill
thick = line thickness in inches (default = 0.0)
Red Green Blue = RGB values (0.0 0.0 0.0 is black)
Record example for default: 'arlmap.shp' 0 0.005 0.4 0.6 0.8
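
A hypothetical pointer file with two entries, the default background plus an illustrative rivers shapefile drawn as thin
blue lines, might contain:

'arlmap.shp' 0 0.005 0.4 0.6 0.8
'rivers.shp' 0 0.002 0.0 0.0 0.8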

Table of Contents
Trajectory / Display / Frequency
Multiple trajectory files can be displayed by creating an arbitrary grid over the computational domain and then counting
the number of trajectory intersections over each grid cell and normalizing by the total number of trajectories (or
endpoints, as discussed in the following). A trajectory may intersect a grid cell once or multiple times (with residence
time options 1, 2, or 3). With the default residence time (RT) option (0), the number of trajectories that intersect each
grid cell is divided by the total number of trajectories. With RT=1, the number of endpoints in each grid cell is divided
by the total number of trajectories (in this case, there can be values over 100%). With RT=2, the number of endpoints in
each grid cell is divided by the total number of endpoints. With RT=3, the number of endpoints in each grid cell is
divided by the maximum number of endpoints in any cell.

Each trajectory is defined by one output file, which is only permitted to contain one trajectory. Trajectory output file
names begin with the base name defined in the setup menu (tdump) and are then followed by some arbitrary
identification text (such as the date) that is created by the Run Daily menu option. The trajectory endpoint files should
reside in the working directory. The first step in the process is to create a file of trajectory file names, which will always
be called INFILE. The text entry box defines the trajectory base name, which is treated as a wildcard string when the
filenames are searched and added to the contents of INFILE. This file may be manually edited to remove any trajectory
files that are not required for the frequency analysis.

Then select the grid resolution according to the scale of the desired result. Too fine a grid with too few trajectories can
result in a poorly contoured display. The gridded binary output file is in the HYSPLIT concentration format. The
concentration plotting routine is used to display the trajectory frequency results, and hence some of the label options are
inconsistent with this particular application. The trajectory frequency results could be exported to an ASCII file using
one of the concentration convert utility options or displayed using one of the GIS formats.

Table of Contents
Trajectory / Utilities / Endpoints to IOAPI
This menu calls a utility program to convert the HYSPLIT ASCII trajectory endpoint file to the IOAPI ID_referenced
data format. Multiple trajectories and meteorological data values along the trajectory are supported. The conversion
program is currently only available on UNIX or LINUX systems and will convert all the endpoints for one or more
trajectories in a file in one pass. The maximum number of trajectories and variables per trajectory are subject to
compilation limits. The data files follow no specific naming convention and any input or output file name may be
selected in the menu.

Table of Contents
Trajectory / Special Runs
The Special simulations menu tab is required because certain options may require a different executable file,
modifications to the Control file that are not supported by the GUI, or interactions with other items under the Advanced
Menu tab. More information is provided below for each special simulation. Some Special Simulations may not be
available for all operating systems.

Test Inputs

This menu calls a program called HYSPTEST, which is a simplified version of HYSPLIT that will read the various input
files such as CONTROL and SETUP.CFG to determine if many of the user options are correctly or optimally
configured. The program opens all the meteorological data files. No output files are created except MESSAGE and
WARNING files. Both trajectory and dispersion input files can be read, however only limited testing is conducted with
a trajectory calculation. Standard analysis messages are written to MESSAGE_mod. When CONTROL file or
SETUP.CFG file changes are suggested, these changes will be summarized to the file WARNING_mod. The modified
(or unmodified) input files are written to CONTROL_mod and SETUP_mod.CFG. The suggested changes can be
loaded back into the GUI variables by retrieving CONTROL_mod and SETUP_mod.CFG into their respective menus.

Ensemble

The ensemble form of the model automatically starts multiple trajectories from the selected starting point. Each member
of the trajectory ensemble is calculated by offsetting the meteorological data by a fixed grid factor as defined in the
Advanced Trajectory Configuration tab. The default offset is one meteorological grid point in the horizontal and 0.01
sigma units in the vertical. This results in 27 members for all possible offsets in X, Y, and Z. After the model
calculation has completed, use the normal Trajectory Display tab to view the results. Because the offset is computed in
both directions in the vertical from the starting location, a starting location at the ground would not provide an optimal
configuration for this type of simulation. The default vertical offset is about 250 m. Therefore this should be the
minimum starting height for ensemble trajectories, unless the default offset is changed in the namelist file. An example
of the ensemble trajectory using the example meteorological data is shown in the illustration below.
Matrix

The matrix calculation is a way to set up the CONTROL file for multiple starting locations that may exceed the GUI
limit of 6 under the Trajectory Setup tab. Hundreds or thousands of starting points may be specified. The Run Matrix tab
just executes the model using a CONTROL file that is dynamically created from the default_traj file using a special
program called latlon that is called from within the GUI. The program reads a CONTROL file that is required to have
three starting locations and then re-writes the same file with multiple locations. The multiple starting locations are
computed by the latlon program based upon the number of starting points that fall in the domain between starting point
1 and starting point 2. Each new location is offset by an amount equal to the difference between starting locations 1 and
3. An example illustrates this best: if the original CONTROL file has three starting locations, #1 at 40N 90W, #2 at
50N 80W, and #3 at 41N 89W, then the matrix processing results in a CONTROL file with 121 starting locations, all
one degree apart, all between 40N 90W and 50N 80W, as sketched below.
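
Only the starting-location portion of the CONTROL file is sketched here for this example (the 50.0 m starting height is
illustrative). After the latlon program runs, the 3 becomes 121 and one location record is written for each one-degree
grid point:

3
40.0 -90.0 50.0
50.0 -80.0 50.0
41.0 -89.0 50.0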
The reason for this approach is that only the CONTROL file is modified and not default_traj. The GUI never reads
CONTROL, therefore the file does not need to conform to the GUI limits. The final matrix trajectory using the example
meteorological data for this configuration is shown in the illustration below.

Multiple Trajectories in Time

In the normal simulation configuration, all trajectories start at the time that was defined in the first line of the Trajectory
Setup menu. Trajectories starting at a different time would require an independent simulation. However, there is a
shortcut to permit the calculation of multiple trajectories in time from the same starting location. Set up the simulation
for a single trajectory, then go to the Advanced Trajectory configuration menu. Under the Temporal Trajectory Restart
line, edit the temporal interval in hours from the default of zero (no restarts) to the desired value. For example, if this
value is set to 6 hours (corresponding to NSTR=6 in the namelist file SETUP.CFG, as sketched below), then in addition
to the normal trajectory that starts at the initial time, new trajectories would be started every six hours for the duration of
the simulation. The result of this simulation, using the example meteorological data, is shown in the illustration below.
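
A minimal SETUP.CFG sketch for this example (any other namelist parameters already present in the file would simply
be retained):

&SETUP
NSTR = 6,
/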

Multiple Trajectories in Space and Time

The standard setup options of the GUI permit the calculation of multiple trajectories in space or height by simply
specifying new locations in the CONTROL file. A variation of that procedure was shown above with respect to setting
up a matrix of starting locations. The model also permits the start of new trajectories at different locations along an
existing trajectory. This can be considered a special case of multiple trajectories-in-time because when starting a new
trajectory along an existing trajectory, the new starting location differs in time and space from the original trajectory. In
this variation, new trajectories are started at multiple levels at the temporal restart interval. The multiple restart levels
are assumed to be the same multiple starting heights specified in the original setup and therefore the number of restart
heights must equal the number of starting locations. Go to the Advanced Configuration menu and set up the example as
in the previous case to restart trajectories every 6 hours. Then change the Number of Levels parameter from its default
value of zero (no new trajectories) to the number of new trajectories to be started. For example, set the number of
levels to 2 (sets the NVER=2 parameter in the SETUP.CFG namelist file). The number of starting locations should also
be set to two, at the same location, but with release heights of 10 and 1500 m. Two trajectories, at 10 and 1500 m, will
start every 6 hours. The result is shown in the illustration below.
Daily

Another way to generate multiple trajectories in time is to use the automated daily trajectory menu option to create an
endpoints file for each starting time. This differs from the previous approach where all trajectory endpoints were written
to one output file. Using this menu the model is executed in a script with a unique output file name for each simulation.
This method is required for trajectory clustering.

Clustering

Multiple trajectories may be aggregated into groups to minimize the difference between trajectories in that group using
clustering methods. The use of the clustering program requires one endpoints file for each trajectory. Normally a
minimum of 30 trajectories is recommended for clustering.

Geo-Location

Multiple trajectories can be run from sampling data to determine the highest frequency of upwind directions associated
with the non-zero measurements. The geo-location menu automatically configures the trajectory control file and runs
the model based upon a data file of measured values such that a trajectory is started from each location every hour that
there is a non-zero measurement.

Table of Contents
Trajectory / Special Runs / Daily
This special menu is used for automated trajectory simulations to run for very long durations: days, weeks, or months.
The script will replace the starting month, day and hour in the CONTROL file with values generated in the script from
the menu shown below. In this example the first model run starts at the time set in the CONTROL file: 95 10 16 00. A
new trajectory calculation will be started every 6 hours for one day. The model output files are named according to the
fields set in the control file and namelist file menus, but are appended with an eight digit date field. This approach
differs from a single simulation with multiple starting times in that each simulation is independent and a new output file
is created for each run.

The model should be configured and run for the first simulation time through the standard run menu to ensure that the
simulation is configured correctly. If part of the simulation requires meteorological data from the previous month or the
next month, these should be included in the base simulation test.

As each day's simulation is started, the output file name is written to the display log. A trajectory is not started at the
simulation end time (95 10 17 00). In this example only four trajectories have been computed, the last of which starts at
95 10 16 18. However, all trajectories have the same duration of 12 h as the test trajectory example.

The script in this menu can be used to generate files for input to the trajectory frequency analysis program or the
trajectory clustering program.
Table of Contents
Trajectory / Special Runs / Clustering
The Trajectory Cluster Analysis window has the series of tasks necessary for running a trajectory cluster analysis. This
differs from most, if not all, other HYSPLIT GUI windows that only run one program or do one task. Given a set of
trajectories beginning at one location, the cluster analysis will objectively result in sub-sets of trajectories, called
clusters, that are each different from the other sub-sets. The program will usually produce at least one possible outcome
set of clusters. If more than one outcome is given, the user must then subjectively choose one for the final result. The
trajectories to be clustered can be created in a variety of different approaches. Trajectory output file names should begin
with a common base name as defined in the setup menu tdump and are then followed by some arbitrary identification
text (e.g. date), for example, as created by the Run Daily menu option.

Cluster member trajectories are assigned based on latitude and longitude as described below, not height. Diagnostic
variables (precipitation, etc) in the trajectory endpoints files are ignored.

Description of clustering process:

Initially, total spatial variance is zero. Each trajectory is defined to be a cluster, in other words, there are N trajectories
and N clusters. For the first iteration, which two clusters (trajectories) are paired? For every combination of trajectory
pairs, the cluster spatial variance (SPVAR) is calculated. SPVAR is the sum of the squared distances between the
endpoints of the cluster's component trajectories and the mean of the trajectories in that cluster. Then the total spatial
variance (TSV), the sum of all the cluster spatial variances, is calculated. The pair of clusters combined are the ones
with the lowest increase in total spatial variance. After the first iteration, the number of clusters is N-1. Clusters paired
always stay together.

D = distance between a trajectory endpoint and the corresponding cluster-mean endpoint

SPVAR = SUM(all trajectories in cluster) [SUM(all trajectory endpoints) {D*D} ]

TSV = SUM(all SPVAR)

For the second iteration, which two clusters are paired? The clusters are either individual trajectories or the cluster of
two trajectories that were initially paired. Again every combination is tried, and the SPVAR, and TSV for each is
calculated. The two clusters combined are the ones that result in the lowest increase in TSV. The percent change in TSV
and number of clusters (N-2) are written to a file.

The iterations continue until the last two clusters are combined, resulting in N trajectories in one cluster.

In the first several clustering iterations the TSV increases greatly, then for much of the clustering it typically increases at
a small, generally constant rate, but at some point it again increases rapidly, indicating that the clusters being combined
are not very similar. This latter increase suggests where to stop the clustering and is clearly seen in a plot of percent
change in TSV vs. number of clusters, where the number of clusters is decreasing to the right on the plot. The iterative
step just before (to the left of on the plot) the large increase in the change of TSV gives the final number of clusters.
Typically there are a few "large" increases.

How to run the cluster analysis:

The window shown is from the Run Example case. For Run Standard, the Run_ID is "Standard", Hours to cluster is
"36", the Endpoints folder (directory) is "c:/hysplit/cluster/endpts", and the Number of clusters is set to "1". Run
Example performs cluster analysis on the example set of 12-h duration forward trajectories. Note one of the trajectories
has a duration less than 12 hours and so it is not clustered. The number of trajectories in the example set is small to keep
the cluster section of the HYSPLIT PC package a reasonable size.
Step 1. Inputs.

Run ID - A label to identify each run. The label ends at the first blank space. The other numeric inputs may be
part of the label. For instance if trajectories during 2004 from Ohio were clustered, the Run_ID could be
Ohio_2004. If you used 48-h trajectories, hourly endpoints, and every other trajectory, a label of
Ohio_2004_48_1_2 could be used. If you later decided to only use the first 36-h of the trajectory
Ohio_2004_36_1_2 might be used.

Hours to cluster - Trajectory durations up to the given hour are used. Must be a positive number. Time is from
trajectory origin whether backward or forward. Trajectories terminating before the given hour will NOT be
included in the clustering. Premature terminations commonly result from missing meteorology data or the
trajectory reaching the meteorological grid horizontal or top edge. 36 hours is typical.
Time interval - Identifies which endpoints along a trajectory to use. Typically every hourly endpoint is used (1).
For long trajectories, skipping endpoints will save computational time.

Trajectory skip - Identifies which trajectories in a folder to use. A value of 1 means every trajectory will be used;
2 means every other trajectory; 5 every 5th trajectory, etc. Useful with very large sets of trajectories.

Endpoints folder - All trajectory endpoints files containing 'tdump' in their name in this folder will be used for
clustering.

Working folder - Cluster files are written to this folder.

Archive folder - All cluster files may be moved to this folder for archiving or to remove the files from the working
directory. For permanent archiving, the files need to be moved or renamed since they will be overwritten by files
from subsequent runs.

Projection - Trajectory endpoint latitudes and longitudes are converted to grid points using the specified map
projection in the main cluster program "Run cluster". The projection for the plots is specified separately in Step 3.

Step 2. Run Cluster Program. Possible solutions to the cluster analysis will be available at the end of this step.

Make INFILE. Trajectories must have been run previously, such as via TRAJECTORY-Special Simulations – Run
Daily. All the trajectory endpoints files need to be in one folder and each must have the name “tdump” within its
filename. In this step, a file, INFILE, listing all the “tdump” files will be created in the working folder.

Note on endpoints files - There can be only one trajectory per file. At least 16 trajectories are needed after
trajectories are skipped, if specified.

Run Cluster The cluster analysis program is run here given the INFILE file, the trajectory endpoints files, and the
above inputs. On a typical PC, a cluster run with 365 trajectories, 36-h duration, and using every hourly endpoint,
will take a couple of minutes. Going beyond several years of trajectories will result in a run that will take a long
time and/or use much memory. A warning message is given for “larger” runs, but it can be hard to tell whether a
"large" job has failed due to lack of memory or is still feasible.

For example with 1100 trajectories it may appear not to be running. Try some intermediate runs - 600 trajectories,
900 trajectories - and note the run time. Add to the number of trajectories as reasonable. Let it run overnight. If it
takes "too long", increase the "time interval" to say 2 to use every other trajectory endpoint and/or set the
"trajectory skip" to say 3 to use every third trajectory. Another option to bypass possible GUI errors is to run
cluster.exe from the command line. To do this, open the "Command Prompt" (for Windows, Start, All Programs,
Accessories), cd to your cluster working directory (e.g. cd \hysplit\cluster\working), and run
\hysplit\exec\cluster.exe. If the file with the input values CCLUSTER exists in the cluster working directory,
cluster.exe will start running, otherwise you will be prompted for the input values. When done, "exit" the
Command Prompt, and return to the GUI for the subsequent processing.

The cluster program produces these output files:

CLUSTER – trajectory start date/time and endpoints (tdump) filename for all the trajectories in INFILE; then for
each pass, a listing of the trajectories in each cluster.

DELPCT – the change in total spatial variance of all the clusters from one pass to the next.

CLUSTERno – the filename and trajectory start date/time of trajectories, if any, not clustered; used to create
cluster results (CLUSLIST)

CMESSAGE – diagnostics output file

Display plot (optional) shows the percent change in total spatial variance (TSV) for the final 30 iterations. This
data is from the file DELPCT. Generally at least one iteration can be seen where there is a large increase in the
change of TSV, indicating that “different”, rather than “similar”, clusters are being paired and that the cluster
process should stop before that occurs.

View possible final number of clusters. Typically a pairing of "different" clusters is indicated by a 30% change in
the percent change in total spatial variance (see Step 2, Display plot). Run lists the possible final cluster numbers.
If the 30% criterion does not identify any, the 20% criterion may be chosen. The maximum is arbitrarily set to 20
clusters.

Step 3 Get Results. This step may be repeated using different numbers of clusters. If you exit the GUI, but have not
archived your results, enter the Run_ID and the Working folder again from Step 1, then continue with Step 3. If you
have already archived the results, but want to try a different number of clusters, manually copy everything from the
archive folder to the cluster working folder, enter the Run_ID and the Archive and working folders, then enter the
number of clusters, etc.

Number of clusters Enter the final number of clusters, one of the values listed in Step 2, Run. In general, this will
be a value where the plot from Step 2, Display plot shows a sharp upward turn.

Assign trajectories to clusters "Run" creates a text file listing the trajectory start date/times and filenames in each
cluster (CLUSLIST_NF, where NF is the final number of clusters). Note Cluster #0 is for trajectories not
clustered. Depending on the application, this text file may be the outcome and the plots below may not be needed.

Display Means produces one map with the mean trajectory for each cluster (1-NF), given the final number of
clusters, NF. The arbitrary cluster number and percent of trajectories in each cluster are given.

Display Clusters produces one map for each cluster, showing the trajectories in each cluster.

Trajectories not used are those input to the cluster program (i.e., in the endpoints directory and at the given skip
interval) that terminate before reaching the duration given in Step 1, Hours to cluster. This occurs when the
trajectory reaches the meteorology grid edge or when there is missing meteorology. Trajectories not used
immediately displays a plot showing the trajectories not used and opens the Trajectory Cluster Display window,
from which the cluster-mean trajectories for the trajectories not used (cluster #0) and all the other clusters may be
displayed. Note the plot showing the trajectories not used must have been previously created in Step 3, Display
Clusters, though it is not displayed there.

Archive All files are moved, not copied, to the given directory. Files created in Step 3 contain the final number of
clusters (NF) in their filename; hence output using various values of NF may be readily archived.

Table of Contents
Trajectory / Special Runs / Geo-Location
Summary: This menu is used to configure the model to execute a script to run multiple iterations of the backward
trajectory calculation for periods that correspond with individual measured sampling data. The trajectory results can
then be overlaid using the frequency plot menu option to indicate the most probable source regions. The CONTROL file
should have been previously defined for a forward calculation that corresponds to the sampling period of the measured
data. The measured data file must be in the DATEM format. The configuration process is very similar to that described
for dispersion simulations.

Step 1: defines the measured data input used in this series of calculations. The trajectory model is run backward, from
each of the sampling locations, for each sampling period with a non-zero measurement. Measured values less than or
equal to the value in the text entry box are considered to be zero. By default, three trajectories are started for each
sample, one at the beginning, middle, and end of the period. However, any starting trajectory combination can be
selected using the checkboxes.

Sampling data files are in the ASCII text DATEM format. The first record provides some general information, the
second record identifies all the variables that then follow on all subsequent records. There is one data record for each
sample collected. All times are given in UTC.
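
Since the exact column layout is not reproduced here, the sketch below is illustrative only (hypothetical station and
values); consult the DATEM format documentation for the precise column definitions:

Sample measured data (illustrative only)
YEAR MO DA HRMN DURA LAT LON VALUE STATION
1995 10 16 0600 0600 40.50 -79.50 12.0 001
1995 10 16 1200 0600 40.50 -79.50 8.0 001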

An example measured data file can be used with the sample meteorological data to configure multiple trajectory
simulations starting on 95 10 16 00. Note that the sampling data run from 0600 on the 16th through 1200 on the 17th,
suggesting at least a 30 h if not 36 h trajectory is required to span the measurement data domain. First create a
CONTROL file by running a forward simulation for one trajectory that spans the computational period.

Step 2: creates three CONTROL.{xxx} files for each sampling location with a non-zero measurement at the beginning,
middle, and end of the sampling period using the dat2cntl program. The CONTROL files are numbered sequentially
from 001 to a maximum of 999. The trajectory output files are labeled using the base name from the CONTROL file
followed by the sampler ID number and the starting date and time of the trajectory. A sampler collecting two contiguous
in-time samples will have the same trajectory computed at the end of one sampling period and the beginning of the next
sampling period because there will be two different CONTROL files for each. However, the output file name will be the
same and hence the results will not be double counted in any statistical analysis. Using the sample data, running step
two creates 45 CONTROL files from 15 non-zero measurements.

Step 3: sequentially runs the trajectory simulations starting with CONTROL.001 through the last available control file.
Each simulation uses the same namelist configuration. Once the calculation has completed, the individual trajectories
can be gridded and displayed using the trajectory frequency menu, the results of which are shown below.
In this case the measurement data were created using a forward dispersion simulation from 40N 80W, which is near the
maximum point of the trajectory frequency plot; the position of the maximum is also indicated along the left edge plot
label.

Table of Contents
Trajectory / Setup Run / Control File Format
The trajectory model input control file can be created using any text editor. However if the GUI is not being used, it
would be easier to let the model create the initial file based upon standard output prompts. These are described in more
detail below. When data entry is through the keyboard (a file named CONTROL is not found), a STARTUP file is
created. This contains a copy of the input, and which later may be renamed to CONTROL to permit direct editing and
model execution without data entry. If you are unsure as to a value required in an input field, just enter the forward slash
(/) character, and the indicated default value will be used. This default procedure is valid for all input fields except
directory and file names. An automatic default selection procedure is also available for certain input fields of the
CONTROL file when they are set to zero. Those options are discussed in more detail below. Each input line is numbered
(only in this text) according to the order it appears in the file. A number in parenthesis after the line number indicates
that there is an input loop and multiple entry lines may be required depending upon the value of the previous entry.

1- Enter starting time (year, month, day, hour, {minutes optional})

Default: 00 00 00 00 {00}

Enter the two digit values for the UTC time that the calculation is to start. Use 0's to start at the beginning (or end)
of the file according to the direction of the calculation. All zero values in this field will force the calculation to use
the time of the first (or last) record of the meteorological data file. In the special case where year and month are
zero, day and hour are treated as relative to the start or end of the file. For example, the first record of the
meteorological data file usually starts at 0000 UTC. An entry of "00 00 01 12" would start the calculation 36
hours from the start of the data file. The minutes field is optional but should only be used when the start time is
explicitly set to a value.

2- Enter number of starting locations

Default: 1

Simultaneous trajectories can be calculated at multiple levels or starting locations. Specification of additional
locations for certain types of simulations can also be accomplished through the Special Simulations menu tab, or
through manual editing of the CONTROL file and running the model outside of the GUI. When multiple starting
locations are specified, all trajectories start at the same time. A multiple trajectory in time option is available
through the Advanced menu through a namelist file parameter setting.

3(1)- Enter starting location (lat, lon, meters)

Default: 40.0 -90.0 50.0

Trajectory starting position in degrees and decimal (West and South are negative). Height is entered as meters
above ground-level. An option to treat starting heights as relative to mean-sea-level is available through the
Advanced menu through a namelist file parameter setting.

4- Enter total run time (hours)

Default: 48

Specifies the duration of the calculation in hours. Backward calculations are entered as negative hours. A
backward trajectory starts from the trajectory termination point and proceeds upwind. Meteorological data are
processed in reverse-time order.

5- Vertical motion option (0:data 1:isob 2:isen 3:dens 4:sigma 5:diverg 6:msl2agl 7:average 9:fix-up&down 10:fix-down)

Default: 0

Indicates the vertical motion calculation method. The default "data" selection will use the meteorological model's
vertical velocity fields; other options include {isob}aric, {isen}tropic, constant {dens}ity, constant internal
{sigma} coordinate, computed from the velocity {diverg}ence, vertical coordinate remapping from MSL to AGL,
an option to spatially average the vertical velocity, and two options allowing user-specified fixed vertical
velocities. In the averaging option (7), the averaging distance is automatically computed from the ratio of the
temporal frequency of the data to the horizontal grid resolution.

While not yet available in the GUI, the user-specified fixed-velocity option with both up and down motion (9)
requires the user to also set the rise (WBWR) and fall (WBWF) velocities (m/s) and the height at which the
motion changes from up to down (WBBH) (m) in the SETUP.CFG file. This mode could be used to simulate a
weather balloon with known upward and downward velocities and a known burst height. The downward velocity
(WBWF) is specified as a positive number even though the motion is downward. The burst height (WBBH) is
meters above ground level unless the advanced parameter KMSL is set to 1 to indicate meters above mean sea
level. The top of the model domain (next parameter) must be greater than WBBH. In the fixed-downward-
motion-only mode (10) (also not available yet in GUI), the user must simply specify the downward velocity
(WBWF, m/s) in the SETUP.CFG file. In both of these trajectory-only "weather balloon" modes, the simulation
stops once the trajectory hits the ground, even if the total run time is longer than this. Also, since a typical
weather-balloon flight (e.g., with WBWR=4.0 m/s, WBWF=10.0 m/s, and WBBH=20000.0 m-agl) is only 1-2
hrs, it is recommended that the trajectory endpoint output frequency be set to 1 minute in the SETUP.CFG file
(TOUT=1) to provide output information with sufficient time resolution.
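
For example, the up-and-down balloon mode (9) for the typical flight described above could be configured with a
SETUP.CFG sketched as follows (KMSL is left at 0 so the burst height is meters above ground level; other parameters
may be added as needed):

&SETUP
KMSL = 0,
WBWR = 4.0,
WBWF = 10.0,
WBBH = 20000.0,
TOUT = 1,
/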

6- Top of model domain (internal coordinates m-agl)

Default: 10000.0

Sets the vertical limit of the internal meteorological grid. If calculations are not required above a certain level,
fewer meteorological data are processed thus speeding up the computation. Trajectories will terminate when they
reach this level. A secondary use of this parameter is to set the model's internal scaling height - the height at
which the internal sigma surfaces go flat relative to terrain. The default internal scaling height is set to 25 km but
it is set to the top of the model domain if the entry exceeds 25 km. Further, when meteorological data are provided
on terrain sigma surfaces it is assumed that the input data were scaled to a height of 20 km (RAMS) or 34.8 km
(COAMPS). If a different height is required to decode the input data, it should be entered on this line as the
negative of the height. HYSPLIT's internal scaling height remains at 25 km unless the absolute value of the
domain top exceeds 25 km.

7- Number of input data grids

Default: 1

Number of simultaneous input meteorological files. The following two entries (directory and name) will be
repeated this number of times. A simulation will terminate when the computation is off all of the grids in either
space or time. Trajectory calculations will check the grid each time step and use the finest resolution input data
available at that location at that time. When multiple meteorological grids have different resolution, there is an
additional restriction that there should be some overlap between the grids in time, otherwise it is not possible to
transfer a trajectory position from one grid to another. If multiple grids are defined and the model has trouble
automatically transferring the calculation from one grid to another, the sub-grid size may need to be increased to
its maximum value.

While not available in the GUI, if the user is creating the CONTROL file themselves, two numbers can be
specified here: the first being the number of unique grids and the second being the number of files in each grid.
For example, an entry of 2 12 would mean that there are met files for 2 different grids (e.g., a regional and a
global grid), and that there are 12 files being specified for each grid. The grids should be specified in order of
resolution, with the highest resolution grids (i.e., the smallest horizontal spacing between grid points) being
specified before lower resolution grids. The two entries for each file (directory and filename) are repeated for each
file in the first grid, and then for each file in the second grid, and so on, for any subsequent grids. Note that the
same number of files are required for each grid in this approach. Without the use of this approach (i.e., when only
one number is specified) the maximum number of files that can be used in the simulation is relatively small, but
with this second approach, a much larger number of files can be used in the simulation.

8(1)- Meteorological data grid # 1 directory

Default: ( \main\sub\data\ )

Directory location of the meteorological file on the grid specified. Always terminate with the appropriate slash (\
or /).

9(2)- Meteorological data grid # 1 file name

Default: file_name

Name of the file containing meteorological data. Located in the previous directory.

10- Directory of trajectory output file

Default: ( \main\trajectory\output\ )

Directory location to which the text trajectory end-points file will be written. Always terminate with the
appropriate slash (\ or /).

11- Name of the trajectory endpoints file

Default: file_name

The trajectory end-points output file is named in this entry line.
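
Putting these entries together, a complete CONTROL file built from the default values listed above would look like the
following sketch; the meteorological and output directory and file names are placeholders to be replaced with actual
values:

00 00 00 00
1
40.0 -90.0 50.0
48
0
10000.0
1
\main\sub\data\
file_name
\main\trajectory\output\
file_name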

Table of Contents
Trajectory / Display / Endpoint File Format
The trajectory model generates its own text output file of ASCII end-point positions. The trajectory display program
processes the end-point file. The format of the file is given below:

Record #1

I6 - Number of meteorological grids used in calculation

Records Loop #2 through the number of grids

A8 - Meteorological Model identification


5I6 - Data file starting Year, Month, Day, Hour, Forecast Hour

Record #3

I6 - number of different trajectories in file


1X,A8 - direction of trajectory calculation (FORWARD, BACKWARD)
1X,A8 - vertical motion calculation method (OMEGA, THETA, ...)

Record Loop #4 through the number of different trajectories in file

4I6 - starting year, month, day, hour


2F9.3 - starting latitude, longitude
F8.1 - starting level above ground (meters)

Record #5

I6 - number (n) of diagnostic output variables


n(1X,A8) - label identification of each variable (PRESSURE, THETA, ...)

Record Loop #6 through the number of hours in the simulation

I6 - trajectory number
I6 - meteorological grid number or antecedent trajectory number
5I6 - year month day hour minute of the point
I6 - forecast hour at point
F8.1 - age of the trajectory in hours
2F9.3 - position latitude and longitude
1X,F8.1 - position height in meters above ground
n(1X,F8.1) - n diagnostic output variables (1st to be output is always pressure)
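
A hedged sample of an endpoint file for one forward trajectory with a single diagnostic variable follows; all values
are invented and the column spacing is only approximate:

     1
     NGM    95    10    16     0     0
     1 FORWARD  OMEGA
    95    10    16     0   40.000  -90.000    10.0
     1 PRESSURE
     1     1    95    10    16     1     0     0     1.0   40.037  -89.912     22.5    994.6

The last record repeats for each trajectory at every output time.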

Table of Contents
Concentration / Help
The concentration menu tab is composed of six main sections: setting up the simulation, executing the model, displaying
the concentrations, various utility programs for converting the output to other formats, configuring special simulations,
and simulations in a multi-processor environment. The model can be entirely configured and executed through the
menu. However, for experienced users, each component may be run independently from the command line.

In the Concentration Setup menu, the entire purpose of the GUI is to configure the model's input CONTROL file. This is
a text file that configures the simulation parameters. Once the input parameters are set to their desired value, the model
is executed from the Run Standard Model menu tab. When complete, the output window is closed and Display Options
menu is used to draw and display the concentration contours from the model's binary concentration output file. The
Special Simulations menu is used to configure several different customized simulations for ensemble applications,
source-receptor matrices, and some simple chemistry simulations. The Multi-processor tab invokes some of the same
special simulations but will only run under a multi-processor computing environment. Normally this is not an option
available under MS Windows.

For inexperienced users, a review of the Quick Start Help menu is highly suggested; it goes through a concentration
computation step-by-step using the example meteorological data file.

Table of Contents
Concentration / Quick Start Help
Upon first installation, the model will be configured to run the example case discussed in more detail below.
The "Quick Start" menu tab can be used to run the example or previous simulation in one-step. For more detailed
simulation configurations, follow the steps below.

The easiest way to run the model is to use the GUI menu to edit the model's input CONTROL file. For the purposes of
this demonstration appropriate meteorological files are provided. If for some reason the menu system is not available,
the CONTROL file can be created manually.

Step 1 - start the GUI menu system using the desktop shortcut Hysplit to \working\hysplit.tcl. A widget will appear with
the HYSPLIT graphic and three button options: Menu, Help, Exit. Click on Menu.

Step 2 - The four main menus of Hysplit will appear: Meteorology, Trajectory, Concentration, and the optional
Advanced menu. An additional small widget underneath the main menu gives the current Hysplit version information.
Do not delete this widget as it will terminate the GUI. It provides the reference frame for the model's standard output
and messages and the version number required for updates. Click on Concentration.

Step 3 - Under the concentration menu there are also five options: concentration setup, run standard model,
concentration display, utility programs, and special simulations. In general, they should be executed in sequential order.
Click on Setup Run.

Step 4 - The Setup Run menu brings up similar starting information requirements as with the trajectory menu. There are
three additional sub-menus: Pollutant - that can be used to set the emission rate, duration, and start time of the emission;
Grids - to set the location, resolution, levels, and averaging times of the concentration output grid; and Deposition - to
set the characteristics of each pollutant. Click on Retrieve, enter name of sample pre-configured control file:
sample_conc, then click on OK. After the data entry widget is closed, click on Save and the setup menu will close.

Step 5 - From the main concentration menu tab select Run Model, which copies the setup configuration to the model's
input CONTROL file and starts the model simulation. Messages will appear on standard output showing the progress of
the calculation or after the calculation has completed. Be patient as concentration calculations may take considerably
longer than trajectory calculations. Click on Exit to close the message window. At that point the binary concentration
output file is ready to be converted to a graphical display.

Step 6 - Click on Display / Concentration and then select Contours to run a special program that converts the binary
concentration file to the HTML file concplot.html, suitable for printing. The display widget contains multiple options for
different pollutants (if defined), data grids, levels, and contour options. These are discussed in more detail in the
graphics section. For this example, accept the defaults and just click on Execute Display. If a web browser has been
installed and properly associated with .html files, then it will be automatically invoked by the GUI. If the viewer does
not open, it may be necessary to manually edit the file hysplit.tcl for the directory entry associated with your web
browser program. The output file can be printed through a web browser. The 12 hour average air concentration for a one
hour release is shown in the illustration below.
Table of Contents
Concentration / Setup Run / Pollutant, Deposition, Grids
There are three menu choices under this tab: Pollutant, Grid, and Deposition. The first permits editing of the emission
rate parameters, the second defines the concentration output grid, and the third the deposition characteristics of the
pollutant, if that feature is enabled. This menu is illustrated below. To edit the parameters for a specific species entry
just select the appropriate checkbox.

Note that up to 7 pollutant species may be defined by changing the Num parameter. However, in the current version, the
pollutant and deposition menus must both reference the same number of species. Multiple species simulations are
calculated independently, hence there is no computational difference between running two separate simulations and
combining both species in one simulation. The multiple species option is primarily used for chemical transformation simulations. A
simple example is available from the configuration menu checkbox of "10% per hour", which transforms species #1 to
species #2 at a rate of 10% per hour. The transformation occurs on the same particle and is discussed in more detail in
the Advanced Applications section. More complex transformations require a linkage with compatible external modules.
None are available at this time for public distribution.

Table of Contents
Concentration / Setup Run / Pollutant Definition
There are four CONTROL file entries, lines 11(1) through 14(4), that correspond with each line of the pollutant menu
shown in the illustration below.

10- Number of different pollutants

Default: 1

Multiple pollutant species may be defined for emissions. Each pollutant is assigned to its own particle or puff and
therefore may behave differently due to deposition or other pollutant specific characteristics. Each will be tracked
on its own concentration grid. The following four entries are repeated for each pollutant defined.

11(1)- Pollutant four Character Identification

Default: TEST

Provides a four-character label that can be used to identify the pollutant. The label is written with the
concentration output grid to identify output records associated with that pollutant and will appear in display
labels. Additional user supplied deposition and chemistry calculations may be keyed to this identification string.

12(2)- Emission rate (per hour)

Default: 1.0

Mass units released each hour. Units are arbitrary except when specific chemical transformation subroutines are
associated with the calculation. Output air concentration units will be in the same units as specified on this line.
For instance, an input of kg/hr results in an output of kg/m3. When multiple sources are defined this rate is
assigned to all sources unless the optional parameters are present on line 3(1).

13(3)- Hours of emission

Default: 1.0

The duration of emission may be defined in fractional hours. An emission duration of less than one time-step will
be emitted over one time-step with a total emission that would yield the requested rate over the emission duration.

14(4)- Release start time: year month day hour minute


Default: [simulation start]

The previously specified hours of emission start at this time. An entry of zeros in the field, when input is read
from a file, will also result in the selection of the default values, which correspond with the starting time of the
meteorological data file. Day and hour are treated as relative to the file start when month is set to zero.
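
Assembled into the CONTROL file, entries 10 through 14 for a single default pollutant form the following minimal
sketch:

1
TEST
1.0
1.0
00 00 00 00 00

Here one pollutant labeled TEST is emitted at 1.0 mass unit per hour for one hour, with the zero entries selecting the
default release start at the beginning of the simulation.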

Temporal or Area Emission Variations

This menu is only designed to input point source emission rates. Unless additional input values are provided in the
control file after each emission location, the same rate will apply to all defined point sources for the duration of the
emission. When more complex emission scenarios are required, emission data can be read in from a file that defines a
diurnal emission cycle for any number of pollutants at any number of locations. If the emission rate is to vary in time
beyond one diurnal cycle, there is another input file that can be defined to set a new rate with each emission cycle. In
this latter scenario the rate as well as the location may be changed.

Previous Section of the CONTROL file


Next Section of the CONTROL file

Table of Contents
Concentration / Setup Run / Grid Definition
This section is used to define the grid system to which the concentrations are summed during the integration and
subsequently for post-processing and display of the model's output. There are 10 entries in the CONTROL file for each
concentration grid that has been defined. The lines 16(1) through 25(10) correspond with each of the menu items shown
in the illustration below.

Dispersion calculations are performed on the computational (meteorological) grid without regard to the definition or
location of any concentration grid. Therefore it is possible to complete a simulation and have no results to view if the
concentration grid was in the wrong location. In addition, very small concentration grid spacing will reduce the model's
integration time step and may result in substantially longer simulation clock times.

15- Number of simultaneous concentration grids

Default: 1

Multiple or nested grids may be defined. The concentration output grids are treated independently. The following
10 entries will be repeated for each grid defined.

In the special case with the first grid heights above ground level and the second grid heights above mean sea level,
this value is set to negative 2, and the MSL flag is not set. Similarly if this value is set to negative 4, the first and
third grids are referenced to ground level, the second and fourth to mean sea level. (See section "Height of each
level" below.)

16(1)- Center Latitude, Longitude (degrees)


Default: [source location]

Sets the center position of the concentration sampling grid in decimal degrees. An input of zeros will result in
selection of the default value, the location of the emission source. Sometimes it may be desirable to move the grid
center location downwind near the center of the projected plume position.

17(2)- Grid spacing (degrees) Latitude, Longitude

Default: 1.0 1.0

Sets the interval in degrees between nodes of the sampling grid. Puffs must pass over a node to contribute
concentration to that point and therefore if the spacing is too wide, they may pass between intersection points.
Particle model calculations represent grid-cell averages, where each cell is centered on a node position, with its
dimensions equal to the grid spacing. Finer resolution concentration grids require correspondingly finer
integration time-steps. This may be mitigated to some extent by limiting fine resolution grids to only the first few
hours of the simulation.

In the special case of a polar (arc,distance) concentration grid, defined when the namelist variable cpack=3, the
definition changes such that the latitude grid spacing equals the sector angle in degrees and the longitude grid
spacing equals the sector distance spacing in kilometers.

18(3)- Grid span (deg) Latitude, Longitude

Default: [180.0] [360.0]

Sets the total span of the grid in each direction. For instance, a span of 10 degrees would cover 5 degrees on each
side of the center grid location. A plume that goes off the grid would have a cutoff appearance, which can
sometimes be mitigated by moving the grid center further downwind.

Grid span and spacing should be set according to the scale of the run. Small scale runs can have fine spacing and
small span; the opposite for large scale runs.

In the special case of a polar (arc,distance) concentration grid, defined when the namelist variable cpack=3, the
definition changes such that the latitude span always equals 360.0 degrees and the longitude span equals the total
downwind distance in kilometers. Note that the number of grid points equals 360/arc-angle or the total-distance
divided by the sector-distance.

19(4)- Enter grid # 1 directory

Default: ( \main\sub\output\ )

Directory to which the binary concentration output file for this grid is written. As in other directory entries a
terminating (\) slash is required.

20(5)- Enter grid # 1 file name

Default: file_name

Name of the concentration output file for each grid. See Section 6 for a description of the format of the
concentration output file.

21(6)- Number of vertical concentration levels

Default: 1

The number of vertical levels in the concentration grid including the ground surface level if deposition output is
required.

22(7)- Height of each level (m)

Default: 50

Output grid levels may be defined in any order for the puff model as long as the deposition level (0) comes first (a
height of zero indicates deposition output). Air concentrations must have a non-zero height defined. A height for
the puff model indicates the concentration at that level. A height for the particle model indicates the average
concentration between that level and the previous level (or the ground for the first level). Therefore heights for the
particle model need to be defined in ascending order. Note that the default is to treat the levels as above-ground-
level (AGL) unless the MSL (above Mean-Sea-Level) flag has been set (see advanced configuration).

23(8)- Sampling start time: year month day hour minute

Default: [simulation start]

Each concentration grid may have a different starting, stopping, and output averaging time. Zero entry will result
in setting the default values. Default values are to use the simulation starting time (1). "Backward" calculations
require that the stop time come before the start time.

If the month field is zero, then the minute and hour fields in the sample start will be used to set the sample start
time relative to the simulation start time (1).
For example, if the simulation starting time is 18 10 31 05 and the sample start is set at 00 00 00 02 00 then the
sample start time will be at 18 10 31 07 00.

Warning: The adjustable time step feature (DELT=0 in SETUP.CFG) should only be used when the sampling
start time minute field is zero. If the minute field is non-zero, then the time step should be set to be a factor of the
minute field. (For example, if you are setting the sampling start to be 30 minutes past the hour, you could set
DELT to be 30, 15, 10, 6, 5, 3, 2 or 1.) This needs to be done to ensure that the time HYSPLIT begins the
sampling period lines up with the start of a new time step.

24(9)- Sampling stop time: year month day hour minute

Default: [One year from simulation starting time]

After this time no more concentration records are written. Early termination on a high-resolution grid (after the
plume has moved away from the source) is an effective way of speeding up the computation for high-resolution
output near the source, because once turned off, that grid's resolution is no longer used for time-step
computations.

Zero entry (00 00 00 00 00) results in setting the default values: one year from the simulation
starting time (1), or one year before the simulation start in the case of backward runs.

If the month field is zero, then the minute and hour fields in the sample stop will be used to set the sample stop
time relative to the sample start time (23(8)).
This was implemented in V985.
For example, if the sample start is set to 18 10 31 07 00 and the sample stop is 00 00 00 05 00 then the sample
stop will be set to 18 10 31 12 00.

25(10)- Sampling interval: type hour minute

Default: 0 24 0

Each grid may have its own sampling or averaging interval. The interval can be of three different types: averaging
(type=0), snapshot (type=1), or maximum (type=2). Averaging will produce output averaged over the specified
interval. For instance, you may want to define a concentration grid that produces 24-hour average air
concentrations for the duration of the simulation, which in the case of a 2-day simulation will result in 2 output
maps, one for each day. Each defined grid can have a different output type and interval. Snapshot (or now) will
give the instantaneous output at the output interval, and maximum will save the maximum concentration at each
grid point over the duration of the output interval. Therefore, when a maximum concentration grid is defined, it is
also required to define an identical snapshot or average grid over which the maximum will be computed. There is
also the special case when the type value is less than zero. In that case the value represents the averaging time in
hours and the output interval time represents the interval at which the average concentration is output. For
instance, a setting of {-1 6 0} would output a one-hour average concentration every six hours.
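
Assembled into the CONTROL file, entries 15 through 25 might form the following sketch (the center location,
spacing, and file name are hypothetical):

1
40.0 -90.0
0.05 0.05
30.0 30.0
./
cdump
1
100
00 00 00 00 00
00 00 00 00 00
00 12 00

This defines a single grid centered at 40N 90W with 0.05 degree spacing, a 30 by 30 degree span, one output level at
100 m, default start and stop times, and 12-hour average output written to ./cdump.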

Previous Section of the CONTROL file


Next Section of the CONTROL file

Table of Contents
Concentration / Setup Run / Deposition Definitions
This section is used to define the deposition parameters for emitted pollutants. The number of deposition definitions
must correspond with the number of pollutants released. There is a one-to-one correspondence. There are 5 entries in the
CONTROL file for each defined pollutant. The lines 27(1) through 31(5) correspond with each of the menu items shown
in the illustration below. The radio-buttons along the top can be used to set default deposition parameters, which can
then be edited as required in the text entry section. The second line of radio-buttons define the deposition values for
some preconfigured species: Cesium, Iodine (gaseous and particulate), and Tritium. The reset button sets all deposition
parameters back to zero.

Note that turning on deposition will result in the removal of mass and the corresponding reduction in air concentration;
however, the deposition will not be available in any output unless height "0" is defined as one of the concentration grid levels.

26 - Number of pollutants depositing

Default: number of pollutants on line # 10

Deposition parameters must be defined for each pollutant species emitted. Each species may behave differently
for deposition calculations. Each will be tracked on its own concentration grid. The following five lines are
repeated for each pollutant defined. The number here must be identical to the number on line 10. Deposition is
turned off for pollutants by an entry of zero in all fields.

27(1)- Particle: Diameter (µm), Density (g/cc), and Shape

Default: 0.0 0.0 0.0

These three entries are used to define the pollutant as a particle for gravitational settling and wet removal
calculations. A value of zero in any field will cause the pollutant to be treated as a gas. All three fields must be
defined (>0) for particle deposition calculations. However, these values need to be correct only if
gravitational settling or resistance deposition is to be computed by the model. Otherwise a nominal value of 1.0
may be assigned as the default for each entry to define the pollutant as a particle. If a dry deposition velocity is
specified as the first entry in the next line (28), then that value is used as the particle settling velocity rather than
the value computed from the particle diameter and density.

If gravitational settling is on and the Shape is set to a negative value then the Ganser (1993) calculation is used to
replace Stokes equation for estimating particle fallspeeds. The absolute value of the Shape factor is used for the
calculation. The Stokes equation overestimates particle fallspeeds for particles larger than about 20 micron
diameter. As this diameter often lies within size distributions of volcanic ash particles, it is desirable to use the
Ganser formulation so that particle fallspeeds can be computed accurately. Ganser, G.H., 1993: A rational
approach to drag prediction of spherical and nonspherical particles. Powder Technology, 77, 143-152.

The particle definitions can be used in conjunction with a special namelist parameter NBPTYP that determines if
the model will just release the above defined particles or create a continuous particle distribution using the particle
definitions as fixed points within the distribution. This option is only valid if the model computes the gravitational
settling velocity rather than pre-defining a velocity for each particle size.

28(2)- Deposition velocity (m/s), Pollutant molecular weight (Gram/Mole), Surface Reactivity Ratio, Diffusivity Ratio,
Effective Henry's Constant

Default: 0.0 0.0 0.0 0.0 0.0

Dry deposition calculations are performed in the lowest model layer based upon the relation that the deposition
flux equals the velocity times the ground-level air concentration. This calculation is available for gases and
particles. The dry deposition velocity can be set directly for each pollutant by entering a non-zero value in the first
field. In the special case where the dry deposition velocity is set to a value less than zero, the absolute value will
be used to compute gravitational settling but with no mass removal. The dry deposition velocity can also be
calculated by the model using the resistance method which requires setting the remaining four parameters
(molecular weight, surface reactivity, diffusivity, and the effective Henry's constant). See the table below for more
information. For particles (particle diameter, density, and shape are all nonzero), molecular weight is used as a
flag to turn on and off the resistance deposition scheme for particles. Setting the molecular weight to 1.0 will turn
on the resistance deposition scheme and setting the molecular weight to 0.0 will turn it off resulting in just
gravitational settling playing a role in dry deposition.

Note that the normal deposition mode is for particles to lose mass to deposition when those particles are within the
deposition layer. Thus deposition will not decrease the number of computational particles in the model, but only
decrease the amount of mass carried on the particles. An option to deposit the entire particle's mass at the surface (the
particle is removed) when subjected to deposition is available. See Deposition and Decay for further information.

29(3)- Wet Removal: Actual Henry's constant, In-cloud (GT 1 =L/L; LT 1 =1/s), Below-cloud (1/s)

Default: 0.0 0.0 0.0

Suggested: 0.0 8.0E-05 8.0E-05

Henry's constant defines the wet removal process for soluble gases. It is defined only as a first-order process by a
non-zero value in the field. Wet removal of particles is defined by non-zero values for the in-cloud and below-
cloud parameters. In-cloud removal can be defined as a ratio of the pollutant in rain (g/liter) measured at the
ground to that in air (g/liter of air in the cloud layer) when the value in the field is greater than one. For within-
cloud values less than one, the removal is defined as a time constant. Below-cloud removal is always defined
through a removal time constant. The default cloud bottom and top RH values can be changed through the
SETUP.CFG namelist file. Wet removal only occurs in grid cells with both a non-zero precipitation value and a
defined cloud layer.

30(4)- Radioactive decay half-life (days)

Default: 0.0

A non-zero value in this field initiates the decay process of both airborne and deposited pollutants. The particle
mass decays as well as the deposition that has been accumulated on the internal sampling grid. The deposition
array (but not air concentration) is decayed until the values are written to the output file. Therefore, the decay is
applied only at the end of each output interval. Once the values are written to the output file, the values are fixed.
The default is to decay deposited material. This can be turned off so that decay only occurs to the particle mass
while airborne by setting the decay namelist variable to zero.

31(5)- Pollutant Resuspension (1/m)


Default: 0.0

A non-zero value for the re-suspension factor causes deposited pollutants to be re-emitted based upon soil
conditions, wind velocity, and particle type. Pollutant re-suspension requires the definition of a deposition grid, as
the pollutant is re-emitted from previously deposited material. Under most circumstances, the deposition should
be accumulated on the grid for the entire duration of the simulation. Note that the air concentration and deposition
grids may be defined at different temporal and spatial scales. Note that more computational particles will be
generated when using this option.
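
Assembled into the CONTROL file, entries 26 through 31 for one depositing particle species might form the
following sketch (the values are illustrative only):

1
1.0 1.0 1.0
0.001 0.0 0.0 0.0 0.0
0.0 8.0E-05 8.0E-05
30.0
0.0

This defines a generic particle (nominal diameter, density, and shape of 1.0), an explicit dry deposition velocity of
0.001 m/s, the suggested wet removal time constants, a 30-day radioactive half-life, and no resuspension.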

Previous Section of the CONTROL file

Table of Contents
Concentration / Run Model
Once the Concentration Setup menu has been closed with the Save button, the changes to the simulation parameters are
copied to default_conc. Clicking on the Run Standard Model menu tab first copies default_conc to CONTROL and then
runs the concentration model executable, hycs_std. The executable, by default, attempts to open a file named CONTROL to
read all the required input parameters. If not found, the model will prompt to standard output for values from standard
input. This condition should not occur running the model through the GUI.

In the situation where the namelist file SETUP.CFG has been created through the Advanced/Configuration menu tab,
the message shown below will appear. If the intent was to run using this file, then continue, otherwise one can delete the
file and run, or terminate the simulation. The situation may arise that the namelist file was created for a previous
simulation and is not needed this time.

When the model execution starts, output messages are written to a special window. Successful completion of a
simulation will show a message similar to the example shown in the illustration below:

Additional run-time diagnostic messages and other error messages are always written to a file called MESSAGE. This
file may be viewed through one of the Advanced Menu tabs. Depending upon the nature of the error message, perhaps a
failure in the model initialization process, error messages may also appear in the above window. Once the model has
completed, press Exit to close the window.

Sometimes a configuration (long duration, very fine grid, too many particles) will lead to extremely long run times,
perhaps requiring the model to be terminated prematurely. Closing the GUI will not terminate the simulation because it
always runs in background. In this situation it is necessary to use the Ctrl-Alt-Del key combination (Windows) to find
and kill the hycs_std executable.

Table of Contents
Concentration / Display / Contours
The concentration model generates a binary (big-endian) output file on a regular latitude-longitude grid, which is read
by the other programs to produce various displays and other output. The plotting program, concplot, can be accessed
through the GUI, which is shown in the illustration below, or it can be run directly from the command line. Most, but
not all, of the command line options are available through the GUI.
Normally only one input file is shown, unless multiple files have been defined in the Concentration Setup Run menu.
The default output file name is shown and unless the box is checked all frames (time periods and/or levels) will be
output to that one file. The program uses the map background file, arlmap, which by default is located in the \graphics
directory. Other customized map background files could be defined. Some of these higher resolution map background
files are available from the HYSPLIT download web page. This plotting program also supports the use of ESRI
formatted shapefiles.
The GIS output option will create an output file of the contour polygons (latitude-longitude vectors) in two different
formats: the ESRI generate format for import into ArcMap or ArcExplorer, or XML formatted files packaged by Info-Zip
for import into Google Earth.

For multiple pollutant files, only one pollutant may be selected for display by individual levels, or averaged between
selected levels. These levels must have been predefined in the Concentration Setup menu. Multipliers can be defined for
deposition or air concentrations. Normally these values would default to 1.0, unless it is desired to change the output
units (for instance, g/m3 to ug/m3 would require a multiplier of 10^6).

Contours and color fill can be specified as black and white or color. The none option eliminates the black line defining
contours and only leaves the color fill. This option is incompatible with GIS output options, which require computation
of the contour vector. Contours can be determined DYNamically by the program, changing with each map, or FIXed to
be the same for all maps. A user can set the contour scaling (difference between contours) to be computed on an
EXPonential scale or a LINear scale.

A Python implementation of the concplot program is included in this distribution package. By default, the GUI uses
concplot built from FORTRAN code. To use the Python implementation, select the Python tab near the bottom of the
user interface before generating a plot.

Concplot Command Line Options

The Postscript/SVG conversion program (concplot), found in the /exec directory with all other executables, reads the
binary concentration output file, calculates the optimal map for display, and creates the output file concplot.ps or
concplot.html. Multiple pollutant species or levels can be accommodated. Most routine variations can be invoked from
the GUI. More complicated conversions should be run from the command line using the following optional parameters:

concplot -[options (default)]

-a[Arcview GIS: (0)-no dump, 1-ESRI (log10), 2-ESRI (decimal), 3-Google Earth]

Selecting the ESRI Generate Format output creates an ASCII text file for each output time period that consists of
the value and latitude-longitude points along each contour. This file can be converted to an ESRI Shapefile or
converted for other GIS applications through the utility menu "GIS to Shapefile". The view checkbox can be
unchecked to do just the GIS conversion without opening the Postscript viewer. Selecting Google Earth will create
a compressed file (*.kmz) for use in Google Earth, a free software package to display geo-referenced information
in 3-dimensions. You must have the free Info-Zip file compression software installed to compress the Google
Earth file and associated graphics. The Python implementation can take several formats. See the description for
the --more-gis-options option below.

+a[KML altitude mode: 0-clampedToGround, 1-relativeToGround]:

Selecting clampedToGround (0) positions the contoured concentrations flat on the Google Earth terrain, whereas
selecting relativeToGround will generate 3D contours by extending the edges of the contours to the ground from
the valid height of the concentration data.

-b[Bottom display level: (0) m]

= 0 - Represents the height (meters) below which no data will be processed for display. The level information is
interpreted according to the display (-d) definition.

-c[Contours: (0)]

= 0 - Dynamic contour values (10x intervals) are optimized for each map.
= 1 - Fixed contour values (10x intervals) are the same for all maps.
= 2 - Dynamic contour values (constant interval) are optimized for each map.
= 3 - Fixed contour values (constant interval) are the same for all maps.
= 4 - The contours are set by the user in conjunction with -v option.
=50 - Force contour interval x10 and set dynamically for each frame.
=51 - Force contour interval x10 and set as fixed for all frames.

+c[Contour value file: (0)]

= 0 - No file written
= 1 - Write contour values to text output file CONTUR

-d[Display: (1)-by level, 2-levels averaged]

= 1 - All output levels that fall between the bottom and top display heights are shown as individual frames. A
single level will be displayed if both bottom and top heights equal the calculation level or they bracket that level.
Deposition plots are produced if level zero data are available in the concentration file and the display height is set
to 0.

= 2 - The concentrations at all levels between the specified range are averaged to produce one output frame per
time period. If deposition data is available and a plot is required in addition to the air concentrations, then the
bottom height should be set to 0. Deposition is not averaged with air concentrations.

-e[Exposure units flag: (0)-concentrations, 1-exposure, 2-threshold, 3-special, 4-mass loading]

= 1 - A custom output format in which all the air concentrations have been converted to time-integrated units and
vertically averaged for all levels between the bottom and top heights.

= 4 - A custom output format in which all the air concentrations have been converted to mass loading.

-f[Frames: (0)-all frames one file, 1-one frame per file]

= 0 - All output frames (one per time period) in one file.


= 1 - Each time period is written to a file: concplot{frame number}.ps

-g[Graphic circle overlay: ( )-auto, 0-numb, numb:dist(50) km]

= ( ) - Auto selection procedure draws four circles with the distance between them determined by the program
algorithm.

= # - Specifies the number of circles with the default (50 km) distance interval.

= number:distance - specifies the number of circles and the distance interval between circles. For the special case
of zero circles with a distance specified (e.g. -g0:1500) the program will fix the map with the top and bottom edge
at that distance from the center.

+g[Graphics type: (0)-Postscript, 1-SVG]

= 0 - Output in Postscript.
= 1 - Output in HTML containing Scalable Vector Graphics.
Note that this option is unavailable in the Python version.

-h[Hold map at center lat-lon: (source point), lat:lon]

= lat:lon - Forces the center of the map to be at the specific latitude-longitude point rather than the default source
location. This is normally used in conjunction with the -g option to get the same map each time or when there are
multiple source locations.

-i[Input file name: (cdump)]

= Name of the binary concentration file for which the graphics will be produced. Default name {cdump} or {user
defined}.

-j[Graphics map background file name: (arlmap)]

The program first searches the local directory, then the ..\graphics directory for the name of the default map
background file (arlmap). Set this parameter to select the directory/name of any map background file of
compatible format or specify a special file of shapefile names.

-k[Kolor: 0-B&W, (1)-Color, 2-No lines Color, 3-No lines B&W]

= 0 - Uses gray shade patterns for the contour color fill.


= 1 - Uses the default four color fill pattern.
= 2 - Default color fill pattern but without black contour lines.
= 3 - Black and white color fill but without black contour lines.

-l[Label options: ascii code, (73)-open star]

The default plot symbol over the source location is an open star. This may be changed to any value defined in the
psplot ZapfDingbats library. For instance, a blank (no source symbol) would be defined as -l32

-L[LatLonLabels: 0=none, (1)=auto, 2=set:{value tenths}]

= 0 - No latitude or longitude lines are drawn on the map


= 1 - Latitude and longitude lines spacing is determined automatically
= 2:tenths - line spacing is determined by the given value in tenths

-m[Map projection: (0)-Auto, 1-Polar, 2-Lambert, 3-Mercator, 4-CylEqu]

Normally the map projection is automatically determined based upon the size and latitude of the concentration
pattern. Sometimes this procedure fails to produce an acceptable map and in these situations it may be necessary
to force a map projection.

+m[Max-Min value plotting: 0=none, (1)=both, 2=values, 3=max-square]

The default (1) prints the maximum and minimum values below the contours and plots a red square, the size of
the concentration grid, at the location of the maximum concentration value. These can be turned off individually
using this command line option. Note the +m rather than -m prefix.

-n[Number of time periods: (0)-all, number, min:max,-increment]

= 0 - All time periods in the input file are processed.


= # - Sets the number of time periods to be processed starting with the first.
= #1:#2 - Processes time periods, including and between #1 and #2.
= [-#] - Sets the increment between time periods to be processed. For instance, -n-6 would only process every 6th
time period.

-o[Output file name: (concplot.ps)]

The name of the Postscript output file defaults to concplot.ps unless it is set to a {user defined} value. The Python
implementation supports several formats besides Postscript. See the description for the --more-formats option
below.

-p[Process file name suffix: (ps) or process ID]

The suffix defines the character string that replaces the default "ps" in the output file name. A different suffix does
not change the nature of the file. It remains Postscript. The suffix is used in multi-user environments to maintain
multiple independent output streams.

-q[Quick DATEM plot file: ( )-none, filename]

By defining the name of a text file in this field where the data values are defined in the DATEM format, the
values given in the file will be plotted on each graphic if the starting time of the sample value falls within the
averaging period of the graphic.

-r[Removal: 0-none, (1)-each time, 2-sum, 3-total]

= 0 - No deposition plots are produced even if the model produced deposition output.
= 1 - One deposition plot is produced for each time period.
= 2 - The deposition is summed such that each new time period represents the total accumulation.
= 3 - Similar to =2, deposition is accumulated to the end of the simulation but only one plot is produced at the
end.

-s[Species: 0-Sum (1)-Single Pollutant {N}-Species Number]

= {Species Number} - Only one pollutant species may be displayed per plot sequence if multiple species were
created during a simulation. However, an entry of "0" will cause all species concentrations to be summed for
display.

-t[Top display level: (99999) m]

= 99999 - Represents the height (m) above which no data will be processed for display. The level is interpreted
according to the display definition.

-u[Units label for mass: (mass), also see "labels.cfg" file]

Defines the character string for the units label. Can also be modified using the labels.cfg file.

-v[Values (:labels:colors are optional) for up to 10 fixed contours: val1+val2+...val10]

If the contour values are user set (-c4), then it is also possible to define up to ten individual contours explicitly
through this option. For instance -v4+3+2+1, would define the contours 4, 3, 2, and 1. Optionally, a label and/or
color (RGB) can be defined for each contour (e.g. -
v4:LBL4:000000255+3:LBL3:000255255+2:LBL2:051051255+1:LBL1:255051255). To specify a color but not a
label, two colons must be present (e.g. -v4::000000255).

-w[Grid point scan for contour smoothing (0)-none 1,2,3, ... grid points]

Defines if the gridded concentration data are to be smoothed prior to contouring. For instance, a value of 1 means
that each grid point value is replaced by the average value of 9 grid points (center point plus 8 surrounding).

-x[Concentration multiplier: (1.0)]

= 1.0 - No units conversion.


= X - where {X} is the multiplier applied to the air concentration input data before graphics processing.

-y[Deposition multiplier: (1.0)]

= 1.0 - No units conversion.


= X - where {X} is the multiplier applied to the deposition input data before processing.

-z[Zoom factor: 0-least zoom, (50), 100-most zoom]

= 50 - Standard resolution.
= 100 - High resolution map (less white space around the concentration pattern)
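
As a worked command-line sketch using only the options described above (the file names are hypothetical), the
following would plot the file cdump in color with four user-set contours and a tighter zoom:

concplot -icdump -oplume.ps -c4 -v100+10+1+0.1 -k1 -z80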

Additional supplemental text may be added at the bottom of the graphic by creating a file called MAPTEXT.CFG, which
should be located in the working directory. This is a generic file used by all plotting programs but each program will
use different lines in its display. The file can be created and edited through the Advanced / Panel Labels menu tab.
Units and title information can be edited by creating a LABELS.CFG file which can also be edited manually or through
the Advanced / Border Labels menu tab.

Additional Concplot Command Line Options for Python Implementation

The following options are available only to the Python concplot.

--debug

Print debug messages. This is useful for developers to diagnose an issue.

--interactive

Enter interactive mode. Users can zoom in or move the plot area.

--more-formats=f1[,f2,...]

Specify one or more additional output format(s). This option supplements the output format specified by the -o
option. For example, for -oa.ps --more-formats=pdf,png, three files would be produced, namely, a.ps, a.pdf, and
a.png. Supported formats are eps, jpg, pdf, pgf, png, ps, raw, rgba, svg, svgz, and tif.

--more-gis-options=f1[,f2,...]

Specify one or more additional GIS output format(s). This option supplements the -a option. For example, with
-a1 --more-gis-options=3, both ESRI Generate and Google Earth files will be created.

--source-time-zone

Display dates and times as a local time at the source location. If the option is not given, dates and times will be in
Coordinated Universal Time (UTC).

--street-map[=n]

Show street map in the background. Currently, the option value n may take 0 (TERRAIN) or 1 (TONER). If no
option value is used (i.e., --street-map), n = 0 will be used. This option overrides the -j option.

--time-zone=tz

Show dates and times as a local time at the given time zone tz. The time zone should be listed in the pytz Python
package. For example, it could be US/Eastern, America/New_York, Etc/GMT-5, and so on.
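
Assuming the Python implementation is invoked as a script named concplot.py (the actual entry point may differ by
installation), a sketch combining some of these documented options would be:

python concplot.py -icdump -oplot.ps --more-formats=png,pdf --source-time-zone

This would produce plot.ps, plot.png, and plot.pdf, with dates and times shown as local time at the source location.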

ESRI Shapefile Map Background Files

Another mapping option would be to specify a special pointer file (originally called shapefiles.txt, but now a suffix
other than "txt" is permitted) to replace the map background file arlmap in the -j command line option (see above). Note
-jshapefiles... rather than -j./shapefiles... is required. This file would contain the name of one or more shapefiles that can
be used to create the map background. The line characteristics (spacing, thickness, color) can be specified for each
shapefile following the format specified below:

Record format: 'file.shp' dash thick red green blue


file.shp = /dir/name of input shapefile in quotes
dash = {0} for solid; {dashes}/in; <0 color fill
thick = line thickness in inches (default = 0.0)
Red Green Blue = RGB values (0.0 0.0 0.0 is black)
Record example for default: 'arlmap.shp' 0 0.005 0.4 0.6 0.8
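
For instance, a hypothetical pointer file listing two shapefiles, drawing boundaries as thin black lines and rivers as
thin blue lines, could contain:

'states.shp' 0 0.010 0.0 0.0 0.0
'rivers.shp' 0 0.005 0.0 0.0 0.8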

Table of Contents
Concentration / Display / Grid Values
A generic Postscript equivalent of the Windows-only concentration grid point data display program is available through
this menu. This particular program (gridplot) was designed to plot global sized concentration grids, although any sized
grid may be displayed. Concentration values over the entire grid (or a zoomed area) will be shown using a color-fill in
each grid cell according to its concentration value. The options available through the GUI are the input file,
output file, lowest contour value, contour interval, species number, linear or logarithmic scaling, longitude
offset, center latitude, zoom factor, concentration multiplier, and the choice of GIS output. By default, each
time period is output to a different file name, gridplot_{???}.html, where the sequence number {???} is incremented by
one for each time period. When called from the Global Grid menu tab, the default output file name is gemplot rather
than gridplot.

Several other options are available through the command line, which may be needed if multiple levels have been written
to the concentration output file.

USAGE: gridplot -[options(default)]

-i[input file name (cdump.bin)]


-p[Process output file name suffix]
-o[output name (plot.ps)]
-l[lowest interval value (1.0E-36), -1 = use data minimum]
-m[multi frames one file (0=no) 1=yes]
-c[concentration multiplier (1.0)]
-d[delta interval value (1000.0)]
-s[species number to display: (1); 0-sum]
-h[height of level to display (m) (integer): (0 = dep)]
-j[Graphics map background file name: (arlmap)]
-a[scale: 0-linear, (1)-logarithmic]
-x[longitude offset (0), e.g., -90: U.S. center; 90: China center]
-y[latitude center (0), e.g., 40.0: U.S. center]
-f[(0), 1-ascii text output files for mapped values]
-g[GIS: 0-none 1-GENERATE(log10) 2-GENERATE(value) 3-KML 4-partial KML]
+g[graphics type: (0)-Postscript 1-SVG]
-k[KML options: 0-none 1-KML with no extra overlays]
-u[mass units(ie, kg, mg, etc); note these are labels only]
-z[zoom: (0)-no zoom, 99-most zoom]
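
Using only the options listed above, a hedged example invocation that displays the sum of all species on a
logarithmic scale with a fixed lowest contour (file names hypothetical) would be:

gridplot -icdump.bin -oplot.ps -l1.0E-15 -d10.0 -s0 -a1
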
When called from the Global Grid menu tab, an additional feature is available through the Add Plume checkbox. In this
case, the first plume model concentration grid (if more than one is defined) is added to the global model concentration
grid as defined in this menu. The merged concentration file is called concadd.bin which is then displayed by gridplot.
The two concentration files that are added must be identical in terms of the number of levels and pollutants. However,
the spatial grid characteristics may be different. The heights of the levels at each index do not have to be identical.
If this is important, then heights corresponding to the global model heights need to be set in the CONTROL
file setup menu. Note that the global model vertical heights cannot be changed by the user and correspond with the
heights of the meteorological data levels. The horizontal characteristics of the output grid are always taken from the
grid dimensions of the second grid (-b).

USAGE: concadd -[options(default)]

-i[input file name (cdump)]


-b[base file name to which input is added (gdump)]
-o[output file name with base format (concadd.bin)]
-g[radius (grid points around center) to exclude; default=0]
-c[concentration conversion factor (1.0)]
-p[sum process (0)=add | 1=max value | 2=zero mask | 3=intersect | 4=replace]
-z[zero mask value (base set to zero when input>mask; default=0.0)]
if zero mask value < 0:
base set to zero when input > zero mask value * base
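
A minimal sketch of the merge step described above, using the default file names from the menu, simply adds the
plume grid to the global grid:

concadd -icdump -bgdump -oconcadd.bin
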
Concentration / Display / Particle
In addition to the normal display programs for air concentrations, which are designed to display and contour the actual
values, the particle display programs only show the instantaneous positions of the pollutant particles or puffs that are
integrated in time by the model to produce the air concentration fields. To generate the particle position graphics from
this menu it is necessary to generate a particle dump file (default name: PARDUMP), which may contain one or more
time periods of output. The creation of this file is controlled by the parameters set in the menu tab: "Advanced /
Configuration Setup / Concentration."

The particle display menu requires the name of the particle position input file, the base name of the Postscript output
file, the type of particle display requested, and some additional options. Setting the mass checkbutton changes the size
of the particle display dot according to its mass; the color option sets the dot color rather than just black. The color is set
according to the height of the particle or, if the age value is non-zero, according to the particle age.
The GIS option (valid only for the horizontal plot) will output either the ESRI generate format file or a Google
Earth KML file. If there are too many particles, the density of the display can be reduced by plotting only every Nth
particle, the default is 1 to display every particle. The Set Cross button permits the specification of the actual cross-
section vector, rather than letting the program determine it automatically. Note that similar to the trajectory and
concentration plotting programs, shapefiles can also be defined for the map background.

The "Plane" option results in the conventional graphic shown below of the horizonal plane view. There is one black dot
for each particle. The size of the dot varies according to the pollutant mass assigned to the particle. All examples are 12
or 24 hours after the start of the test simulation.

The "vertical" option shows an integrated particle distribution view from the south looking north (top panel) and from
the east looking west (bottom panel). All particles in the computational domain are shown.
The "cross-section" view is a combination of the horizontal plane and vertical views. The top panel shows the horizontal
particle distribution, while the bottom panel shows the vertical distribution along the red regression line through the
plume. Again all particles are shown regardless of their distance from the regression line. The bottom panel view is from
left to right in increasing longitude, regardless of the orientation of the regression line.
The global display is a special program that will always map the particles on a global equidistant projection. The other
programs try to automatically scale the plots according to the particle distribution, sometimes resulting in distorted plots
when the particle coverage becomes global. There are additional command line features in this program (parsplot) for
particle size and color mapping according to the particle mass that are not available through the GUI.
Table of Contents
Concentration / Display / Time-of-Arrival
The time-of-arrival graphic (isochron) shows the time after the start of the simulation that the concentration exceeds the
given threshold concentration at each concentration grid cell. The default contour interval (-1) uses the concentration
averaging period. Otherwise, the time difference field should be set to hours. A text label with release information may
be printed below the threshold through the LABELS.CFG file.

The GUI and the command line options are similar, with the command line syntax as follows:

USAGE: isochron -[options(default)]

-c[color fill no:0 yes:(1)] : the =0 option results in contours


-d[difference in hours between contours (-1)]
-f[frames: (0)-all frames one file, 1-one frame per file] : multiple frames can only be due to multiple levels or
species in the input file
+g[graphics type: (0)-Postscript 1-SVG]
-i[input file name (cdump)] : the input file needs to contain multiple time periods
-j[graphics map background file (arlmap)] : the shape file option is not available
-n[number of contours (12)] : only 12 contours fit across the display
-o[output name (toa)] : the output file represents all time periods
-p[Process file name suffix: (ps) or process ID]
-t[threshold value (1.0E-36)]

In the 144 hour duration example simulation shown below, the concentration field was output every six hours, the
contour interval was set to 12 hours, and the threshold to 0.1E-15. The one hour duration release took about 72 h to
reach Hudson's Bay and about 144 h to reach into the Atlantic off the east coast of the U.S.
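
A command-line sketch reproducing that configuration (the input file name is hypothetical) would be:

isochron -icdump -d12 -t1.0E-16

where -d12 sets the 12-hour contour interval and the threshold 1.0E-16 equals the 0.1E-15 used in the example.
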
Concentration / Display / Source-Receptor View
This menu option permits the extraction of information for a specific source or receptor if the original model simulation
was configured to produce a source-receptor matrix formatted output file. The model should have been run with the
matrix option set in the configuration menu. This results in a special concentration output file that may be called a
source- receptor matrix, such that each column may be considered a receptor location and each row a pollutant source
location. The display program under this menu tab permits the contouring of any row or column.

If the matrix option was not set in the Advanced Concentration configuration tab, then the output file produced is a
simple concentration output file, where the concentrations from all the sources of the defined matrix are summed on a
single concentration output grid. Source-receptor information cannot be extracted from such a file.

When a location is selected in the menu, a special program is called to extract that location from the concentration
matrix output file and then write a standard concentration file for that location. The standard concentration display
program is then used to display the information in the extracted file. The source-receptor matrix binary extraction file
name will default to SRM_{original file name}, but can be changed in the menu. The Display Matrix menu, shown in
the illustration below, automatically creates the extraction file, calls the standard display program concplot, and
provides for additional custom label information to identify the plot as a source-receptor concentration matrix.
The menu consists of the standard display options such as the map background file, output file name, zoom factor, and
the selection of the extraction method: source or receptor. A latitude and longitude point needs to be entered for all
extractions. Selection of the "source" extraction method means that the location entered is considered to be the source
location, and the resulting output is a contour map, as in a conventional air concentration simulation, showing
concentrations from that source. The "receptor" extraction method means that the location entered is considered to be
the receptor location and the output is a map of how much air concentration each source contributes to the selected
receptor. Note that turning on the "normalization" flag divides all concentrations by the sum of all concentrations at
each grid point, resulting in a map that represents a fractional contribution. In addition, a concentration conversion
factor can be defined, which will be applied to the binary file before it is contoured.

Table of Contents
Concentration / Display / Source-Receptor / Stats
Summary: This procedure is intended to find the source location that provides the best fit with measurement data. After
running HYSPLIT with the matrix option and ICHEM=1, the script extracts each source location from the CONTROL
file, converts the output to the DATEM format, and then calls the statistical analysis program; the statistical results
are then converted to a gridded binary file that can be plotted by any of the standard concentration plotting programs.

Step 1: defines the DATEM formatted measured data file and unit conversion factor required to convert the HYSPLIT
concentration predictions to the same units as given in the measured data file.

Step 2: defines the name of the statistical output file that will contain the statistical results from each source location
compared with the measured data values. Normally the statmain program outputs one results file per execution. In this
configuration, the results from each source location are appended to a single file, one record per source.

Step 3: defines the root name of the final output graphics file which will show a plot of the statistical results by source
location. The .ps suffix is created by default.

Step 4: runs sequentially through all steps of the process, reading each source location from the CONTROL file, running
matrix to extract that location from the HYSPLIT simulation output file, using c2datem to convert that binary extract to
the DATEM format, and then running statmain to append the results to the statistical output file. This step needs only to
be run once.

Step 5: defines which statistic will be shown: Corr - correlation coefficient, NMSE - normalized mean square error, FB
- fractional bias, FMS - figure of merit in space, KSP - Kolmogorov-Smirnov parameter, and finally the rank, which is a
combination of the previous parameters. Once all source locations have been processed, the statistical file is read and
converted to binary gridded format using stat2grid. This output file can then be plotted with any of the display programs
such as gridplot.
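
The overall processing flow, using the default file names listed below, can be sketched as follows (an illustrative
outline only, not an exact script):

for each source location in the CONTROL file:
    matrix    - extract that source from cdump to SRM_cdump
    c2datem   - convert SRM_cdump to the DATEM format (hysplit_datem.txt)
    statmain  - compare with measured.txt and append one record to sumstats.txt
then:
    stat2grid - convert sumstats.txt to the gridded binary file statmap.bin
    gridplot  - plot the selected statistic by source location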

Default file names: The following describes the assigned default input and output file names for each processing
section, so that they can be identified in the working directory. File names in italics are the ones that can be changed
through the menu.

matrix

cdump - binary outfile from a single multi-source simulation using the ICHEM=1 option
SRM_cdump - binary concentration output file extracted for one source location

c2datem

SRM_cdump - binary concentration input file from the previous step


measured.txt - measured data file in DATEM format
hysplit_datem.txt - model prediction output for a single source converted to DATEM format

statmain

measured.txt - measured data input file in DATEM format


hysplit_datem.txt - model prediction input file for a single source from previous step
sumstats.txt - statistical summary output appended one record per source location

stat2grid

sumstats.txt - statistical summary input file from previous step


statmap.bin - binary statistical output file gridded in the HYSPLIT concentration format

Table of Contents
Concentration / Display / Ensemble / View Map
Multiple concentration output files from the ensemble or variance dispersion simulation should have been previously
processed in the ensemble Create Files tab of the display menu. This step would have created fourteen probability
output files in the working directory. These files are used to generate all graphics. The selected files are then plotted
using concplot according to the menu options. An illustration of the View map menu is shown below.

The menu includes some of the standard display program (concplot) mapping choices, such as the background file, output
file name, zoom factor, and the choice between the FORTRAN- and Python-based implementations.

The menu permits a choice of six different ensemble output display options:

1. The number of ensemble members at each grid point with concentrations greater than zero shows the spatial
distribution of the number of members (cnumb).
2. The mean concentration of all ensemble members (cmean).
3. The variance or the mean square difference between individual members and the mean (cvarn).
4. The coefficient of variation of all ensemble members, expressed as 100 × √Cvarn / Cmean (ccoev).
5. The probability of concentration produces contours that give the probability of exceeding a fixed concentration
value at one of three levels: 1% of the maximum concentration (cmax01), 10% of the maximum (cmax10), and
the maximum concentration (cmax00). The concentration level for the probability display is shown on the graphic
with the pollutant identification field set to something like C14, where 14 indicates a concentration of 10^-14.
6. The concentration at percentile levels shows the concentration contours of areas where concentrations will be
exceeded only at the given probability level. Although several output levels are computed, the probability level
choices through the menu are limited to the 50th, 90th, and 95th percentiles (prob[50|90|95]).

Table of Contents
Concentration / Display / Ensemble / Box Plot
Multiple concentration output files from the ensemble or variance dispersion simulation can be processed to produce
probability displays. The Ensemble / Create Files menu calls a special program conprob that reads the concentration
files with the ensemble member three-digit suffix (001 to 999) and generates various probability files. Once these
probability files have been created, the concentration distribution at a single location can also be viewed using this menu
by entering a latitude-longitude location. The nearest grid point will be selected and the concentration values are
retrieved from the prob{xx} files, which represent the various points on the box plot. In the illustration shown below,
the limits of the box represent the quartiles (25th and 75th percentiles), the whisker extensions show the 10th and 90th
percentiles, and the circles show the 5th and 95th percentiles. The median value is shown by the line through the box,
while the mean is shown by the plus symbol. Up to 12 time periods can be shown in one plot. The graphic is always
named boxplots.html.

A second graphic called ensplots.html is always produced using the individual ensemble concentration files. The abscissa
and ordinate are identical to those of the box plot, but the contents show the actual concentration values identified by member
number, where the number is the same as the concentration file suffix.
Table of Contents
Concentration / Display / Ensemble / Statistics
Summary: This procedure is intended to analyze the binary output files from any of the ensemble simulations. The
output files are named using the concentration output file name set in the CONTROL
file, to which a three-digit sequential number suffix (.000) is automatically appended. The ensemble simulation needs to
be for a case in which experimental data are available for verification. The script converts the output to the DATEM
format, and then calls the statistical analysis program which appends a statistical summary for each member to a single
output file.

Step 1: defines the DATEM formatted measured data file and unit conversion factor required to convert the HYSPLIT
concentration predictions to the same units as given in the measured data file.

Step 2: defines the name of the statistical output file that will contain the statistical results for each ensemble member
compared with the measured data values. Normally the statmain program outputs one results file per execution. In this
configuration, a subset of the complete results for each member is appended to a single file. The complete statistics
are available in a file named stat_000.txt.

Step 3: runs sequentially through all steps, processing each ensemble member using c2datem to convert that binary
concentration data to the DATEM format, and then running statmain to append the results to the statistical output file.
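
The per-member processing loop can be sketched as follows (an illustrative outline only, using the default file names):

for each ensemble member suffix xxx (000, 001, ...):
    c2datem  - convert {base name}.{xxx} to the DATEM format (hysplit_datem.txt)
    statmain - compare with measured.txt; append one summary record to sumstats.txt
               and write the complete statistics to stat_{xxx}.txt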

Default file names: The following describes the assigned default input and output file names for each processing
section, so that they can be identified in the working directory. File names in italics are the ones that can be changed
through the menu.

c2datem

{base name}.{xxx} - binary concentration input file, where xxx = 000 to 999
measured.txt - measured data file in DATEM format
hysplit_datem.txt - model prediction output for a member converted to DATEM format

statmain

measured.txt - measured data input file in DATEM format


hysplit_datem.txt - model prediction input file for a single source from previous step
sumstats.txt - statistical summary output appended one record per source location
stat_{xxx}.txt - complete statistics for each member

Table of Contents
Concentration / Utilities / Binary File Merge
The purpose of the Binary File Merge utility menu is to add together two or more gridded HYSPLIT binary
concentration files, where the contents of the input file are added to the contents of the base file, the results of which are
written to the output file. The input files need not be identical in time, but they must have the same number of pollutants
and levels. Levels and pollutants will be matched by index number and not actual height or species. The horizontal grids
need to be identical or even multiples of each other. This means that the concentration grid center should be explicitly
defined rather than using the default values (0.0). For instance, half-degree and one-degree concentration grids need to at
least intersect at their corner points. Summations will occur regardless of any grid mismatches. The files are merged at
each grid point according to the processing rules: a simple summation, selecting the maximum value, or the sum only at
the intersection points. An additional option, called masking, is defined such that a concentration value greater
than the mask value (normally 0.0) in the input file will cause the base file to be set to 0.0 at that grid point. The GUI is
shown below.
If the Create Filenames button is selected, a special file called INFILE is created that contains all the file names in the
working directory that match the wildcard sequence defined in the Input Name box. These files are then merged
sequentially according to the processing rules. Note that the concentration multiplier is applied when the output file is
written; during sequential file processing this conversion is disabled.

The Set sampling time labels as min-max checkbox sets the sample start time in the output file to the earlier of the
input and base file start times, and the sample stop time to the later of the two stop times. An example application
of this feature would be when multiple files at different times are processed into a single file and the desired sample
output duration is from the beginning of the first file to the end time of the last file.
The above example shows how two plume simulations have been merged. Each was run for a different day and starting
location; however, the graphic and its corresponding binary output file use only the information provided in the base file,
and the labeling information from the input file has been lost.

In the above example, the input and base files are identical to the previous illustration, but in this example the mask
checkbox option was selected and the result clearly illustrates the region of the base plume that was intersected by the
input plume.
The result from the intersection option is shown in the above figure. In this result only the region where the two plumes
intersect and both exceed the zero mask value is shown. The region is identical to the missing plume area in the mask
option.
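
As a simple numeric illustration of the processing rules (hypothetical values), consider a grid point where the base
file contains 2.0E-12 and the input file contains 3.0E-12, with a zero mask value:

sum       - output = 2.0E-12 + 3.0E-12 = 5.0E-12
maximum   - output = 3.0E-12
intersect - output = 5.0E-12, because both values are non-zero at this point
mask      - base value set to 0.0, because the input value exceeds the mask value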

Table of Contents
Concentration / Utilities / Binary File Extract
Calls the utility program concrop to extract a sub-grid from a HYSPLIT binary concentration data file. The new subgrid
is written to the file conxtrct.bin. The utility program can be used to remove the whitespace by resizing the grid about
the non-zero concentration domain or by specifying a sub-grid. Using the -x option will force the extraction of the user's
domain regardless of white space.

USAGE: concrop -[options (default)]


-b[latmin:lonmin:latmax:lonmax (all zeroes will produce a full grid extract)]
-d[process id (txt)]
-f[FCSTHR.TXT: (0)=none 1=output]
-i[Input file name: (cdump)]
-g[Output plume grid limits rather than cropped file: (0)=no; 1=center-radius -1=from{-b}]
-o[Output file name: (ccrop)]
-p[time extract MMDDHH:MMDDHH]
-x[override white space: (0)=no (default) 1=yes]
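
For example, to force the extraction of a fixed sub-grid from the default input file (the coordinates are illustrative;
the flags are those documented above):

concrop -icdump -oconxtrct.bin -b30.0:-90.0:45.0:-60.0 -x1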

Table of Contents
Concentration / Utilities / Binary File Average
This menu calls the utility program conhavrg to spatially average the HYSPLIT binary concentration file and write a
new binary file. Each grid cell is replaced by the average value of the center cell and the surrounding cells specified by
the scan radius. The default scan radius of 1 means that 9 grid cells will be averaged together.

USAGE: conhavrg [options]


-i[input file name (cdump)]
-o[output file name (cavrg)]
-s[surrounding grid points to average (1)]
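
In general, a scan radius of s averages (2s+1)*(2s+1) cells, consistent with the default of 9 cells for s=1. For example,
the following command (using the default file names) would average each cell with its 5x5 neighborhood of 25 cells:

conhavrg -icdump -ocavrg -s2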

Table of Contents
Concentration / Utilities / Binary File / Apply Source
When multiple simulations are conducted, each one representing a different emission, the model results can be analyzed to
determine which simulation provides the best fit with any available measurements, or more specifically, what emission rate is
required with each simulation to provide the best fit with the measurements. These simulations can each represent a different
emission time or location or a combination of both. Currently the code is configured to automatically generate multiple time-
varying emissions using the namelist parameters QCYCLE and ICHEM.

As an example, if hourly (the minimum) resolution is desired over a 24 hour simulation period, then the emission rate needs to
be set to emit one unit for a duration of one hour. The namelist variable QCYCLE should also be set to 1.0 hour so that the
emissions effectively become continuous at a rate of one unit per hour. This in combination with ICHEM=10 will result in the
creation of 24 concentration arrays with each particle being tagged according to its release hour and those particles will only
contribute to concentrations on the grid with the same release-time tag. The pollutant identification field is used and its value
corresponds to hours and tenths (no decimal) after the start of the simulation. Only 4 digits are available which limits the
duration to 999.9 h. The output concentration grid will appear to be like any other but with 24 pollutants, one for each release
period as defined by the QCYCLE parameter. Make sure that a sufficient number of particles have been released to provide
consistent results for all time periods.

Once the simulation has completed, the unit source simulation can be converted to concentrations by applying an emission rate
factor for each release time. This is accomplished by clicking on the Concentration / Utilities / Binary File / Apply Source menu
tab. This menu is only intended to work with a single multi-pollutant concentration file created in the manner described
previously.

The input required is simply the name of the Transfer Coefficient Matrix (TCM) input file (the previously created multi-
pollutant, multi-time-period file), the base name of the single pollutant output file which will contain the concentration
sum of all the release periods after they are multiplied by the source term, the 4-character pollutant identification field, and the
name of the time-varying source term file if it exists, otherwise a new file can be created. An emission rate needs to be defined
for each release time, otherwise it is assumed to be zero.
The emissions file format is simply the starting time of the emission given by year, month, day, and hour, and the emission rate
in terms of units per hour. The first record of the emissions file should contain the column labels, followed by the data
records, one for each release time period. For example, a constant emission rate corresponding to the 12 hour example test
simulation (16 October 1995) would appear as follows:

YYYY MM DD HH TEST
1995 10 16 00 1.00E+12
1995 10 16 01 1.00E+12
1995 10 16 02 1.00E+12
1995 10 16 03 1.00E+12
1995 10 16 04 1.00E+12
1995 10 16 05 1.00E+12
1995 10 16 06 1.00E+12
1995 10 16 07 1.00E+12
1995 10 16 08 1.00E+12
1995 10 16 09 1.00E+12
1995 10 16 10 1.00E+12
1995 10 16 11 1.00E+12

Command Line Options - tcmsum

The underlying binary file conversion program can be run from the command line. As with all other applications, the command
line argument syntax is that there should be no space between the switch and options. The command line arguments can appear
in any order:

tcmsum -[option {value}]

-c[column number in emissions file (1)]


-i[input file name (cdump)]
-o[output file base name (tcmsum)]
-d[output file name suffix (bin)]
-h[half-life-days (0.0 = none)]
-p[pollutant sum label (SUM)]
-s[source term input file name (cfactors.txt)]
-t[time decay start: MMDDHH (run_start)]
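
For example, to apply the source terms in cfactors.txt to the TCM file and write the summed output to tcmsum.bin
(all names are the program defaults; the flags are those documented above):

tcmsum -icdump -otcmsum -dbin -scfactors.txt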
The procedure described in this section is very similar to the transfer coefficient solution approach with the exception that here
the dispersion coefficients for the individual release times are contained in a single file while in the other approach an input file
is required for each release time simulation. The single file created here can be processed through the utility program conlight
to extract one time period to its own file with each pass through the program. Currently no GUI script application exists to
perform this task automatically.

Table of Contents
Concentration / Utilities / Convert to ASCII
The Convert to ASCII menu option uses the con2asc program to convert the binary concentration file to a simple ASCII
file composed of one record per grid point for all grid points where concentrations at any level are non-zero.
Concentrations for multiple levels and pollutant species are all listed on the same record for each grid point. The
primary purpose of the conversion is to create a file that can be imported into other applications. An illustration of the
GUI menu is shown below for the sample concentration simulation.

The Concentration Setup menu determines the file name selection option on the GUI. There are some additional
checkboxes that correspond to various command line conversion options: con2asc -[options (default)]

-c[Convert entire file flag]

This option converts the entire binary input file, including all index records, to an ASCII output file with the name
{input file}.txt. This option is not available through the GUI (use -s below). Another program, called conread,
also not available through the GUI, can also be used to dump out the contents of the concentration file. This
program can only be run from the command line. For the -c option, the output file format follows the binary file
format record-by-record using the following conventions.

Meteorological model and starting time - A4, 6I4


Starting time and locations - 4I4, 2F8.3, F10.1
Concentration grid and spacing - 2I4, 2F8.4, 2F8.2
Vertical grid index record - I4, 20I8
Pollutant identification record - I4, 20A4
Sample start time - 6I4
Sample end time - 6I4
Concentration record - A4, I6, 255(255E10.2)

-d[Delimit output by comma flag (rather than space delimited)]

Set the Comma Delimited checkbox.


-i[Input file name (cdump)]

-m[Minimum text output format flag]

Setting this flag turns off the writing of the first output record, which is the column label field: DAY HOUR LAT
LON SPECIES-LEVEL. This option corresponds to the Minimum Text checkbox of the GUI menu.

-o[Output file name base (Input File Name)]

The default base name for the output file is the input file name. A new output file is created for each sampling
period, where the name of the file is composed of the {base name}_{Julian day}_{hour} of the sample ending
time. If the Include Minutes box is checked, then the minutes field of the sample ending time is added to the end
of the file name. This field is not available through the GUI. The name of each output file is always written to the file
named CON2ASC.OUT regardless of any other options selected.

-s[Single output file flag]

This option corresponds to the Single File checkbox of the GUI menu. The multi-time period ASCII output file is
named {input file}.txt.

-t[Time expanded (minutes) file name flag]

-u[Units conversion multiplier for concentration (1.0)]


-U[Units conversion multiplier for deposition (1.0)]

Values other than one in either of these fields will result in the concentration or deposition values being multiplied
by these factors prior to output. Deposition fields are identified by a zero in the level height field.

-x[Extended precision flag]

The format of each record in the output file is given by:


2I3 - End of Sample: Julian Day and Hour
F7.2, F8.2 - Latitude and Longitude of grid cell
45E9.2 - Concentration data (by level and pollutant)

If the Extended Digits checkbox of the precision options is set, then the latitudes and longitudes will be given with
four digits after the decimal place.

-z[Zero points output flag]

Setting the zero flag causes the program to output the concentration values at all grid points, including the ones
that are zero. This corresponds to the Include Zero checkbox of the GUI menu.
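
For example, to convert the entire file to a single comma-delimited text file with a units conversion applied (the
multiplier value is illustrative; the flags are those documented above):

con2asc -icdump -s -d -u1.0E+15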

Each output record is identified by the day (Julian: 1 to 365) and hour (UTC) of the ending time of each sample. The
ASCII conversion of the first file generated by the sample calculation is shown below in the illustration.
Table of Contents
Concentration / Utilities / Convert to DATEM
This menu calls a utility program to convert the HYSPLIT binary concentration file to the DATEM data format and
compares the model predicted results with the actual measured data. The Data Archive of Tracer Experiments and
Meteorology (DATEM) web page contains additional details about each of the experiments, the data formats, and more
complete descriptions of each of the programs used to convert the data and generate the statistical results. The on-line
repository has links to several different tracer experiments and their associated meteorological data. All experimental data
have been converted to a common format. The meteorological data are compatible for direct use by HYSPLIT.

To facilitate application of the analysis software, all the experimental data from the second CAPTEX release are provided
in the ./hysplit/datem directory. Included are the control, namelist, measured data, and meteorological data files. The
meteorological data are taken from the North American Regional Reanalysis. The computational sequence is as follows:

1) Configure and Run HYSPLIT

Go to the concentration setup run menu and retrieve ./datem/captex.cnt


Go to the advanced configuration concentration menu and retrieve ./datem/captex2.cfg
Save and then Run Model

2) Convert HYSPLIT run results to DATEM format

Browse to load the measured data file ./datem/captex2.txt


Create the DATEM file hysplit.txt from the binary file hysplit2.bin
Compute the Statistics and view the Scatter Plot

In the first section of the menu, a measured data file already in DATEM format should be selected. The input data file
represents the HYSPLIT binary concentration output file which is then converted to a DATEM formatted text file using
the units conversion factor. The conversion program will match each measurement with a model calculated
concentration corresponding to the location and time duration of the measurement. Note that the model temporal output
frequency needs to be the same or of finer resolution than the measurement periods, otherwise calculations cannot be
matched with the measurements.

The rotation angle entry field (default = 0.0) can be used to test the sensitivity of the model prediction to small changes
in wind direction. Setting a non-zero angle will result in the rotation of the calculated plume with respect to the
sampling network in the direction specified. The rotation is computed using the first source location specified in the
setup menu. The effect of this test is comparable to rotating the transport wind direction by the same amount. However,
this sensitivity test only makes sense in the context of plumes that have a relatively homogeneous and stationary
structure.

Once the model output file in DATEM format has been created, the statistical evaluation program should be run to
compare the model calculations with the measurements in terms of various performance statistics. Two data processing
options are available. For verification statistics, all values can be compared, only plume values (both measured and
calculated values are greater than zero), or all values but excluding pairs where both measured and calculated values are
zero. In terms of averaging, the measured and calculated values can be used directly, temporally averaged resulting in
one value for each location, or spatially averaged resulting in one value for each time period. The contingency level is
the concentration value required to be exceeded to compute the false alarm rate, probability of detection, and threat
score. The contingency level is also used as the plotting threshold for the measured-calculated scatter diagram which is
created by the Scatter Plot menu button.

Note that simulations with multiple species and levels are not supported. The concentration data files follow no specific
naming convention and any input or output file name may be selected in the menu. Arbitrary input file names can only
be set through the Setup / Grids menu. The statistical output files are always written to stat{A|T|S}.txt where the
character A|T|S represents All data, Temporal averaging, or Spatial averaging. An additional output file, data{A|T|S}.txt
is created for the scatter diagram plot, which is always named scatter.html. The Rename output field in the GUI adds
those text characters between the above prefix and suffix fields: {stat|data}{A|T|S}_{rename}.txt

DATEM format description

Record 1 (ASCII) - file identification information


Record 2 (ASCII) - column identification information
Record 3 ... end - individual data records
Field 1 (I4) - start year of sample
Field 2 (I3) - start month of sample
Field 3 (I3) - start day of sample
Field 4 (I5) - start hour minute of sample (HHMM)
Field 5 (I7) - sample duration hours minutes (HHHMM)
Field 6 (F10) - sample location latitude (fractional degrees)
Field 7 (F10) - sample location longitude (fractional degrees)
Field 8 (F10) - sample value
Field 9 (I10) - sample station identification (integer)
Field 10 (F10) - sample height (optional; only used in this utility program)
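
A data record consistent with these field formats might look as follows (hypothetical values, for illustration only):

1983   9  25  1700    600    42.930   -77.320      15.2       501      10.0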

Table of Contents
Concentration / Utilities / Convert to Dose
Converts a HYSPLIT binary concentration and deposition file to dose in rem/hr. The HYSPLIT calculation should be
done using a unit emission so that the concentration units are m^-3 and the deposition units are m^-2. This post-processing
step reads the file activity.txt which contains the activity (Bq) at time=0 for all the isotope products. Sample activity.txt
files can be created by the program for a high-energy nuclear detonation or a thermal fission reaction from a power plant
reactor. The file also contains the half-life and cloud- and ground-shine dose conversion factors for each radionuclide
specified in the file.

For nuclear detonations, the activity.txt file contains columns for either a high-energy or thermal fission reaction
assuming a yield of 1 kT. The emission factor can be specified as a multiple of 1 kT. Activation products resulting from
the fission are not considered. Two fuel sources (U235 and Pu239) are available for each reaction type. The fission yield
data were obtained from T.R. England and B.F. Rider, Los Alamos National Laboratory, LA-UR-94-3106; ENDF-349
(1993). The external dose rate for cloud- and ground-shine is computed from the factors given by Eckerman K.F. and
Leggett R.W. (1996) DCFPAK: Dose coefficient data file package for Sandia National Laboratory, Oak Ridge National
Laboratory Report ORNL/TM-13347.

During the processing step, the cumulative product of the activity and dose factor is computed for each decay weighted
concentration averaging period, independently for noble gases and particles. Command line options exist to turn off the
dose calculation but still multiply the dispersion factors by the activity, resulting in air concentration output. If the
dispersion factors input file contains only one pollutant type and level, then an option can be set to output each of the
species defined in the activity.txt file. Concentration units may also be converted from Bq to pCi.

For nuclear power plant accidents, the same fission product table can be used by entering the duration of the reactor
operation in terms of mega-watt-hours based upon the relation that 3000 MW-hours releases an energy equivalent to
2.58 kT. An alternative approach is to generate an activity.txt file that corresponds with the radionuclide release profile
of the reactor accident. The command line option -afdnpp will create a sample activity.txt file that corresponds to the
maximum emissions over a 3-hour period during the Fukushima Daiichi Nuclear Power Plant accident for the 10 most
critical radionuclides for short-term dose calculations.
Post Processing Program CON2REM command line options (default):

-a[activity data file name (activity.txt) or {create|fdnpp}]

Names the input file describing activity associated with each species and the dose conversion factors. The -acreate
option will create a sample activity.txt file for a 1kT detonation with just a few species to illustrate the format.
The -afdnpp option will create a file with the top ten radionuclides contributing to dose during the Fukushima
Daiichi Nuclear Power Plant accident.

-b[breathing rate for inhalation dose in m3/h (0.925)]

The breathing rate is used as a multiplier for the calculation of the inhalation dose factor. This is only used for the
cloudshine dose calculations.
-c[Output type: (0)-dose, 1-air conc/dep]

If the flag is set to one, the dose conversion factors are set to one and the output will be concentration, the product
of the input air concentration (from a unit emission) times the activity for each radionuclide defined in activity.txt.
The values are summed for all species in each pollutant class (NGAS or RNUC).

-d[Type of dose: (0)=rate (R/hr) 1=summed over the averaging period (R)]

When computing dose, the option is to output a rate or the total dose accumulated over each concentration
averaging period. Note that dose from deposition is always summed over the duration of the simulation.

-e[include the inhalation dose in the calculation (0)=No 1=Yes]

This is used in conjunction with the -b option. The default is not to include the inhalation dose as part of the
cloudshine calculation. If this is turned on, ensure that the inhalation dose factors are correctly defined in the last
column of activity.txt.

-f[fuel (1)=U235 2=Pu239]

This flag selects which column of the activity.txt file will be used to determine the emission amount. The first two
columns are for U-235 fission and the last two columns are for Pu-239 fission. There are columns for high-energy
or thermal fission (see -p flag). Also see the -y and -w flags to scale the emissions to the event.

-g[decay type: 0=normal {c=1}, (1)=time averaged decay]

Prior to computing the dose, the activity is time-decayed to either the end of the sampling time (0=normal) or as
the time averaged activity between the start and end time of each sampling period (1=time-averaged, the default).
Normal decay can only be applied to air concentration calculations. Dose output always requires time-averaged
decay.

-h[help with extended comments]

Provides a short narrative of the functionality of this program with a list of all the command line options.

-i[input file name (cdump)]

The name of the HYSPLIT air concentration/deposition binary output file. The simulation can contain both air
concentrations and deposition field. Proper conversion of the model output fields to dose requires a unit release
rate in the HYSPLIT calculation.
-n[noble gas 4-char ID (NGAS)]

The program searches for this pollutant 4-character identification to use as the dispersion factor for noble gases.
All other ID values (RNUC, etc) are assumed to be for particles. All species will be summed to either the NGAS
or the other dispersion factor unless species matching is defined (-s1).

-o[output file name (rdump)]

The output file will contain the same number of levels and time periods as the input file but with the dispersion
factors (unit concentrations) converted to dose or species concentration according to the settings of the other
command line options.

-p[process (1)=high-energy 2=thermal]

Defines which column of the activity.txt file to use for a specific fuel type. Option 1 is for high-energy fission
(columns 1 or 3) and option 2 is for thermal fission (columns 2 or 4). In the case of the FDNPP activity.txt file,
the activity values are the same in all columns. This feature may be used with customized activity.txt files to
select different source options. Also see the -f flag for selecting the fuel type.

-q[convert dose from rem=0 to sieverts=(1)]

The default dose output is in REM because the conversion factors are defined in the activity.txt file in REM/Bq.
Setting this value to one converts the dose output from REM to Sieverts (100 REM = 1 Sievert).

-s[(0)=sum species, 1=match to input, 2=output species]

The default value (0) sums each species in the activity.txt file to its corresponding pollutant class (NGAS -
noble gas, or RNUC - radionuclide particle). The match to input option (1) just does a one-to-one match of the
pollutant ID in the activity.txt file with the pollutant IDs used in the simulation. Pollutant IDs can be 4-digit
integer numbers or characters for option (1). In option (2), the output file can be expanded to include a
concentration for each species defined in the activity.txt file using the dispersion factors from its pollutant class
from the simulation. In this configuration, HYSPLIT can only be run with one level and species.

-t[decay: 0=no decay (input already decayed), or (1)=decay by species half-life]

The default (1) is for all species to be decayed to the valid time of the concentration output from the start time of
the simulation which is assumed to be the time fission ceased. The time of release to the atmosphere can occur
much later. The no decay option is only required when decay has already been applied in the dispersion
calculation. In general, decay should only be set in the dispersion calculation for short-duration releases which
coincide with the termination of fission, otherwise decay may be computed incorrectly. Computing decay within
this post-processing step is the most accurate way to approach dose simulations.

-u[units conversion Bq->pCi, missing: assume input Bq]

When air concentration output is selected, then the output units can be converted from Bq to pCi.

-w[Fission activity in thermal mega-watt-hours, replaces the -y option value]

If the activity.txt file represents the number of fissions per kT, and the -w field is non-zero, then the activity is
computed with respect to the number of MWh generating the fission products. The assumption is that a 3000 MW
reactor creates about 2.58 kT per hour of fission products.

-x[extended decay time in hours beyond calculation (0)]

Additional decay time can be added using this field to estimate long-term doses. This calculation applies to either
-t option but only for deposition doses. This option permits longer-term doses to be computed by adding
additional decay to the existing output files.

-y[yield (1.0)=kT]

The default assumes that the activity levels defined in activity.txt represent the emissions from a 1 kT device.
Other values are computed as a multiplier to the values defined in the table. The multiplier can be used in the
context of a detonation in terms of kT or a multiple of the 3-hour emission maximum from the Fukushima Daiichi
Nuclear Power Plant accident.

-z[fixed decay time in hours (0)]

The fixed decay is computed to the end of the sample time in the output file regardless of the actual simulation or
release start time. All output periods, regardless of sampling time, will be decayed for this number of hours.
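
For example, to convert a unit-source simulation to dose in sieverts for a 10 kT U-235 high-energy detonation
(the values are illustrative; the flags are those documented above):

con2rem -icdump -ordump -f1 -p1 -y10.0 -q1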

An example of the activity.txt file for a 1 kT detonation:

Mass Nucl     T1/2        U235H       U235T       Pu239H      Pu239T      Cloudshine  Groundshine Inhalation
Hr=  0.00     sec         Bq          Bq          Bq          Bq          rem/h|Bq/m3 rem/h|Bq/m2 rem/Bq
 85  Kr   3.38613E+08 9.44699E+11 7.53098E+11 5.74803E+11 3.27318E+11 9.18000E-11 0.00000E+00 0.00000E+00
 90  Sr   9.18326E+08 4.50386E+12 5.67152E+12 2.06059E+12 2.06059E+12 3.53880E-11 5.90400E-13 2.38000E-06
131  I    6.94656E+05 5.31842E+15 3.74884E+15 5.64272E+15 5.00710E+15 6.08400E-09 1.31040E-10 7.38000E-07
133  Xe   4.52995E+05 1.10002E+16 1.33275E+16 9.66744E+15 1.39641E+16 5.00400E-10 0.00000E+00 0.00000E+00
137  Cs   9.52093E+08 4.66591E+12 5.85842E+12 4.21162E+12 6.25592E+12 3.34080E-11 1.07640E-12 4.63000E-07
140  Ba   1.10160E+06 3.68093E+15 5.07968E+15 3.02654E+15 4.37621E+15 2.90520E-09 6.84000E-11 1.03000E-07
140  La   1.44979E+05 2.81554E+16 3.86593E+16 2.38668E+16 3.33141E+16 3.99600E-08 7.77600E-10 1.07000E-07

An example of the activity.txt file for the Fukushima Daiichi Nuclear Power Plant accident for the top ten
radionuclides important for short-term dose calculations. The activity represents the maximum number of Bq released
over a 3 hour period.

Mass Nucl     T1/2        U235H       U235T       Pu239H      Pu239T      Cloudshine  Groundshine Inhalation
Hr=  0.00     sec         Bq          Bq          Bq          Bq          rem/h|Bq/m3 rem/h|Bq/m2 rem/Bq
 95  Nb   3.02000E+06 1.62000E+13 1.62000E+13 1.62000E+13 1.62000E+13 1.26000E-08 2.62000E-10 2.38000E-06
110  Ag   2.18000E+07 6.48000E+12 6.48000E+12 6.48000E+12 6.48000E+12 4.57000E-08 9.29000E-10 2.38000E-06
132  Te   2.82000E+05 1.62000E+16 1.62000E+16 1.62000E+16 1.62000E+16 3.36000E-09 7.63000E-11 7.38000E-07
131  I    6.94656E+05 8.10000E+15 8.10000E+15 8.10000E+15 8.10000E+15 6.08400E-09 1.31040E-10 7.38000E-07
133  I    7.49000E+04 8.10000E+15 8.10000E+15 8.10000E+15 8.10000E+15 9.94000E-09 2.22000E-10 7.38000E-07
133  Xe   4.52995E+05 3.66000E+17 3.66000E+17 3.66000E+17 3.66000E+17 5.00400E-10 0.00000E+00 0.00000E+00
134  Cs   6.50000E+07 8.10000E+14 8.10000E+14 8.10000E+14 8.10000E+14 2.54000E-08 5.33000E-10 4.63000E-07
137  Cs   9.52093E+08 8.10000E+14 8.10000E+14 8.10000E+14 8.10000E+14 3.34080E-11 1.07640E-12 4.63000E-07
140  Ba   1.10160E+06 4.05000E+13 4.05000E+13 4.05000E+13 4.05000E+13 2.90520E-09 6.84000E-11 1.03000E-07
140  La   1.44979E+05 4.05000E+13 4.05000E+13 4.05000E+13 4.05000E+13 3.99600E-08 7.77600E-10 1.07000E-07

As an example, a HYSPLIT simulation could be configured with two species, NGAS for the noble gases and RNUC for
the radio-nuclide particles. After the emissions stop, the total emissions of both NGAS and RNUC should equal one.
Then in the post-processing step, con2rem is called and the concentration and deposition fields are multiplied by the
total emission and dose conversion factor for each species and added together to get a ground-shine and cloud-shine
dose file, which can then be plotted using any of the standard HYSPLIT display programs. Dose results, by species, are
decayed to the valid time of the output graphic. An additional time factor can be added for long-term dose estimates.

One aspect of applying the decay in the post-processing step is that the decay times are referenced to the start of the
simulation rather than the start of the release. This means that the radionuclide release rate is valid at the start of the
simulation time. This might be the best configuration for a nuclear power plant accident, where the fission is assumed to
have stopped at the start of the release (and simulation). However in other situations, where emissions might be
occurring in concert with some other process, then starting the decay at the time when the radionuclides are released
might be a more appropriate configuration. If the decay is computed during the model calculation for each species
released then use the con2rem options -s1 -t0 to apply just the emission factors and not decay in the post-processing
step.

Table of Contents
Concentration / Utilities / Convert to IOAPI
This menu calls a utility program to convert the HYSPLIT binary concentration file to the IOAPI gridded data format.
Multiple species, levels, and time periods are supported. The conversion program is currently only available on UNIX
or LINUX systems. The data files follow no specific naming convention and any input or output file name may be
selected in the menu. Arbitrary input file names can only be set through the Setup / Grids menu.

Table of Contents
Concentration / Utilities / Convert to Station
The purpose of the Convert to Station utility menu (also called time series data extraction) is to list concentrations at
specific latitude-longitude locations by extracting that information to a text file. The menu also has an option to produce
a time series plot at one or more of the stations or a KML formatted file suitable for display by Google Earth. An
illustration of the menu is shown below. The concentration grid names are determined from the Concentration / Setup
Run menu tabs and therefore the setup menu needs to be called prior to opening this menu. The species and height
values are index numbers starting at one. A height index of 1 is the data at the first output level, regardless of its actual
height value. The date field can be written as a fractional Julian day (for plotting) or using a MM/DD/YYYY field (for
spreadsheets) with or without an index record. The no index record option is used to concatenate multiple files.

The TCM checkbox is used to flag the input file as having been created using the ICHEM=10 namelist option. There is
no specific marker in the concentration file to indicate that the file is in this format. The file contains multiple pollutants,
each associated with a different time of release. The TCM flag also causes the first output time group with each record
to be associated with the time of release rather than the time of the sample collection. The second time group of the
record becomes the start time of the sample collection.
Similar to the other menus in the utility section, the input concentration file must be defined in the Concentration Setup
menu. The menu options correspond to the command line options of the con2stn extraction program. There are two
options that can be used to define an extraction location. A station location can be defined directly as an entry in the
menu, or a list of stations can be defined using an input file. If the input file does not exist, it can be created by using the
New File button. In this example illustration, a file has been defined with three locations that are within the plume of the
example simulation. The file consists of three records, one for each station:

The extraction program is called con2stn and, for these three stations, produces the output file shown below, called by
default con2stn.txt. The base name of the output file (con2stn) can be changed in the menu. In contrast to the simulation
shown in all the previous examples, in this case the output averaging time was decreased from 12 hours to one hour, to
generate a smoother looking graphic.

The output file shows the Julian day, month, day, and hour of the sample start; day and hour of sample ending time, and
the concentrations for each station (location selected by latitude-longitude). The format of each output record is as
follows:

F8.3, 6I4 - Starting: Julian day, year, month, day, hour; Ending: day, hour
XF10 - Concentration value at X stations

The lower section of the GUI is used to create a simple plot of the concentration time series or
an output file (con2stn.kml) suitable for display by Google Earth. The con2stn.txt file is created first which is then read
by the Google Earth conversion program (stn2ge).

For time series data, one or more stations may be plotted. The plotting feature is also available through the command
line. The option is selected from the Display Time Series checkbox. The program, timeplot, reads the data file produced
by the con2stn conversion program and plots the concentration values to the timeplot.ps output file. The illustration for
the previous text file is shown below.
There are only two plot options supported through the GUI: linear or logarithmic ordinate scaling. If integer ordinate
units are desired then it may be necessary to specify a units conversion factor, in this case 10^15, to create data in the text
file that can be plotted. With the log scaling option, the conversion factor can be set to 1.0, and the ordinate scale will
cover the appropriate order-of-magnitude range to plot all the data.

Command Line Options - con2stn

The program can be run from the command line or through interactive prompts from the keyboard. The command line
argument syntax is that there should be no space between the switch and options. The command line arguments can
appear in any order: con2stn -[option {value}]

-a[rotation angle:latitude:longitude]
-c[input to output concentration multiplier]
-d[mm/dd/yyyy format: (0)=no 1=w-Label 2=wo-Label]
The default date format is to write a fractional Julian date in the first output field. This format is required
for plotting purposes. However, a more conventional month/day/year format can be selected, which is more
compatible with spreadsheet applications. In the first option (=1) an informational header record is written.
In the other option (=2) the header record is excluded, which is more convenient when appending multiple
files.
-h[half-life-days (one -h entry for each pollutant)]
-i[input concentration file name]
Unspecified file names will result in a standard input prompt.
-m[maximum number of stations (200)]
-o[output concentration file name]
-p[pollutant index (1) or 0 for all pollutants]
Level and pollutant index values can be selected for files with multiple levels and species. Setting zero (0)
for either index forces the output to be the record format rather than the column format. Record format
contains the index numbers for the pollutant and level.
-r[record format 0=record (1)=column 2=datem]
The default output format is to create multiple columns, one for each station, for a selected pollutant and
level. Setting the value to one formats the output as one record for each station, level, and pollutant. A value
of two sets the output to the DATEM format.
-s[station list file name (contains: id lat lon)]
The station positions can be read from a file (space or comma delimited) with the first field being an integer
that represents the location identification, followed by the decimal location in latitude and longitude.
-t[transfer coefficient processing]
-x[(n)=neighbor or i for interpolation]
The default interpolation method (-xn) is to use the value at nearest grid point to each latitude-longitude
position, otherwise bilinear interpolation (-xi) is used.
-z[level index (1) or 0 for all levels]
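
For example, to extract a time series for the stations listed in a station file, applying a units conversion and using
bilinear interpolation (the file names and factor are illustrative; the flags are those documented above):

con2stn -icdump -ocon2stn.txt -sstations.txt -c1.0E+15 -d1 -xi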

Command Line Options - stn2ge [option {value}]

-i[input text file name] contains data in the format output from con2stn
-s[station list file name] same file used as input to con2stn
-o[google earth output filename (less .kml)]
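
For example, using the files from the previous step (the file names are illustrative):

stn2ge -icon2stn.txt -sstations.txt -ocon2stn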

Command Line Options - timeplot [option {value}]

-i[input concentration file name] contains data in the same format as output from con2stn
+g[graphics type (0) for Postscript output or 1 for HTML output with SVG]
-m[minimum ordinate value (0.0)]
-n[sequential station number]
For files with multiple stations select the station to plot; default for multiple stations is to plot all stations;
several stations can be selected to plot by appending station numbers with the plus symbol: hence -n3+5+6
will plot stations 3, 5, and 6.
-p[draw only points, no connecting line]
-y[The default is linear ordinate scaling. The flag sets y-axis log scaling.]
-z[three character time label]

Note that multiple pollutants and levels are not supported. Timeplot labels can also be customized by creating a
LABELS.CFG file. See the Border Labels discussion for more information on this file format.
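
For example, to plot stations 1 and 2 from the extracted file with logarithmic ordinate scaling (the flags are those
documented above):

timeplot -icon2stn.txt -y -n1+2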

Table of Contents
Convert particle position file to concentration file (S350)
This program reads the HYSPLIT binary particle output file (PARDUMP) and recalculates the concentrations using the
CONTROL file to determine grid dimensions, times, and output file names. Par2conc is used in conjunction with the
namelist variable NDUMP < 0 (input and output of particle files).

The name of the particle positions output file is specified on the command line as follows:

USAGE: PAR2CONC -[options(default)]


INPUT PARAMETERS: -i[input file name (PARDUMP)]

Table of Contents
Concentration / Utilities / Particle Adjustment
This menu calls a utility program that can be used to adjust the particle positions prior to restarting the simulation from a
saved particle position file. The particle position file needs to be written by setting the appropriate namelist parameter as
described in the Advanced/Configuration Setup/Concentration menu tab. Normally observational data should be
consulted prior to making any adjustments to the particle positions. Positions are only shifted horizontally and not
vertically. The same horizontal adjustment is applied at all heights. If a height adjustment is required, the model should
be re-run with a different initial particle height distribution.

One or more particle files may be processed by a single execution. A standard simulation only generates one particle
output file containing positions at one or more time periods. Multiple particle position files, with a three-digit suffix
(.001, .002, etc), are automatically generated by the ensemble version of the model. The adjusted output is always
written to a new file name, defaulting to PARINIT, also the default name for initializing a new simulation.

Regardless of how many time periods are contained in the particle position file, the adjustment is only applied to one
time period, normally the initialization time period for the next simulation. If the time field (-t) is not specified, then the
adjustment occurs to the first time period in the file, otherwise the adjustment is applied to the MMDDHH designated in
the time argument. Only this one time period is written to the output file, regardless of how many time periods are
contained within the input file.

A particle position shift can be specified as an angular adjustment in degrees and distance from a fixed location (such as
the particle source point). A shift may also be specified as a window translation, where all particles within a given
latitude-longitude window are shifted by a specified number of degrees in latitude and longitude. No particles outside of
the window or beyond the radius are shifted unless the blending (-b) flag is set. In this case, the shift is applied to each
particle in a linearly decreasing fashion to zero adjustment at a distance of two windows.

The spatial adjustment of the particle positions in the HYSPLIT binary particle output file is specified on the command
line (or through the GUI) as follows:

USAGE: parshift -[options (default)]


-b[blend shifting outside of the window or distance range]
-i[input base file name particle positions (PARDUMP)]
-o[output base file name of adjusted particles (PARINIT)]
-r[rotation shift (degrees:kilometers:latitude:longitude)]
-s[search for multiple input files with .000 suffix attached to base name]
-t[time MMDDHH (missing field then process first time only)]
-w[window translation corners lat1:lon1:lat2:lon2 (-90.0:-180.0:90.0:180.0)]
-x[delta longitude for window translation (0.0)]
-y[delta latitude for window translation (0.0)]
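
For example, to shift all particles within a given window by 1.5 degrees of longitude and 0.5 degrees of latitude
(the values are illustrative; the flags are those documented above):

parshift -iPARDUMP -oPARINIT -w35.0:-100.0:45.0:-80.0 -x1.5 -y0.5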

The particle position file may be converted to a binary concentration file through the command line utility program
par2conc.

Table of Contents
Concentration / Utilities / Transfer Coefficient
Overview: This menu can be used to solve the transfer coefficient matrix for the source term vector given a measured
data vector and where the matrix values are the dilution factors for each source-receptor pair. The measured data vector
at multiple receptor locations and/or times is required and it must be defined in the DATEM format. The format of this
file is discussed in more detail in the GeoLocation menu. If a time-varying source solution is required, the Run Daily menu
can be used to generate the required dispersion simulations. Due to model errors and insufficient sampling, not all
simulations will yield a solution. Results are written to the file source.txt.

Theory: Assume that the concentration at receptor R is the linear sum of all the contributing sources S times the dilution
factor D between S and R:

S1D11 + S2D12 = R1
S1D21 + S2D22 = R2

The dilution factors are defined as the transfer coefficient matrix. The sum of each column product SiDij shows the total
concentrations contributed by source i to all the receptors. The sum of the row product SiDij for receptor j would
show the total concentration contributed by all the sources to that receptor. In this situation it is assumed that S is known
for all sources. The dilution factors of the coefficient matrix are normally computed explicitly from each source to all
receptors, the traditional forward downwind dispersion calculation.

In the case where measurements are available at receptor R and source S is the unknown quantity, the linear relationship
between sources and receptors can be expressed by the relationship:

Dij Si = Rj,

which has a solution if we can compute the inverse of the coefficient matrix:

Si = (Dij)^-1 Rj.

For example, in the case of an unrealistic 2x2 square matrix (the number of sources equals the number of receptors), the
inverse of D is given by:

D^-1 = | +D22  -D12 |  * 1/(D11 D22 - D12 D21)
       | -D21  +D11 |

The solution for S1 (first row) can be written:

S1 = D22 R1/(D11 D22 - D12 D21) - D12 R2/(D11 D22 - D12 D21)

As a further simplification, assume that there is no transport between S1 and R2 (D12 = 0), and then the result is the
trivial solution that the emission value is just the measured concentration divided by the dilution factor:

S1 = R1/D11
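
As a numeric illustration (hypothetical values): if the unit-source simulation gives a dilution factor D11 = 5.0E-12
(concentration per unit emission) and the measured concentration is R1 = 1.0E-9, then the estimated source strength is
S1 = 1.0E-9 / 5.0E-12 = 200 emission units.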

The matrix solution has three possibilities. The most common one is that there are too many sources and too few
receptors, which results in multiple solutions requiring singular value decomposition methods to obtain a solution. The
opposite problem is that there are too many receptors and too few unknowns, hence an over-determined system requiring
regression methods to reduce the number of equations. Unfortunately, possibilities for a matrix solution may be limited
at times due to various singularities, such as columns with no contribution to any receptor or measured values that have
no contribution from any source. The solution to these problems is not always entirely numerical as the underlying
modeling or even the measurements can contain errors. Note that large dilution factors (very small predicted
concentrations) at locations with large measured values will lead to large emissions to enable the model prediction to
match those measurements. The opposite problem also exists in that negative emission values may be required to
balance high predictions with small measurements. The solution to the coefficient matrix is driven by errors, either in
the measurements, the dispersion model, or the meteorological data.

Step 1: defines the binary input files which are the output file from the dispersion simulations configured to produce
output at an interval that can be matched to the measured sampling data. Ideally the model simulation emission rate
should be set to a unit value. Each simulation should represent a different discrete emission period. For example, a four
day event might be divided into 16 distinct 6-hour duration emission periods. Therefore the matrix would consist of 16
sources (columns) and as many rows as there are sampling data. The entry field in step 1 represents the wild card string
*entry* that will be matched to the files in the working directory. The file names will be written into a file called
INFILE. This file should be edited to remove any unwanted entries.

Step 2: defines the measured data input file which is an ASCII text file in the DATEM format. The first record
provides some general information, the second record identifies all the variables that then follow on all subsequent
records. There is one data record for each sample collected. All times are given in UTC. This file defines the receptor
data vector for the matrix solution. It may be necessary to edit this file to remove sampling data that are not required or
edit the simulation that produces the coefficient matrix to ensure that each receptor is within the modeling domain.

Step 3: defines the file conversion details from sampling units to the emission units. For instance, the default value of
10^+12 converts the emission rate units pg/hr to g/hr if the sampling data are measured in pg/m3 (pico = 10^-12). The
exponent is +12 rather than -12 because it is applied in the denominator. The height and species fields are the index
numbers to extract from the concentration file if multiple species and levels were defined in the simulation. The half life
(in days) is only required when dealing with radioactive pollutants and the measured data need to be decay corrected
back to the simulation start time.

Step 4: creates the comma-delimited input file called c2array.csv, with the dilution factors in a column for each source,
where each row represents a specific receptor location. The last column is the measured value for that receptor. The
column title represents the start time of the emission in days from the year 1900. This step calls the program c2array,
which reads each of the measured data values and matches them to the input files, extracting the dilution factors from
each source for that measured value.

Step 5: solves for the source vector using the SVD (Singular Value Decomposition) methods from Numerical Recipes:
The Art of Scientific Computing (Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T., 1986, Cambridge
University Press, 818 pp.). The solution may be controlled to some extent by defining a zero-dilution threshold,
below which the dilution factors are set to zero. An alternative approach is to define a cumulative percentage dilution
factor, below which an entire sampling row is eliminated. For instance, if the % delete field contains 10, then the
rows representing the lowest 10% of the dilution factors are removed. In the event the computation fails to produce an
adequate solution, it may be necessary to edit c2array.csv. The solution results are written to the output file source.txt
and also displayed through the GUI. A solution may contain negative values as well as extreme positive emission
results; such values are not realistic and result from model errors or other uncertainties. This step calls the program
tcsolve.
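
Outside the GUI, the essence of this step can be reproduced with a few lines of code. The sketch below is a minimal
illustration of the method only (it is not the tcsolve source); the threshold value, the assumed c2array.csv layout
described above, and the use of numpy's SVD-based least-squares solver are all assumptions:

import numpy as np

# Load the TCM: one dilution-factor column per source, last column = measured
# values; the first row holds the column titles (emission start times).
data = np.loadtxt("c2array.csv", delimiter=",", skiprows=1)
dilution, measured = data[:, :-1], data[:, -1]

# Apply a zero-dilution threshold (value assumed for illustration).
dilution[dilution < 1.0e-20] = 0.0

# SVD-based least-squares solution; rcond discards small singular values.
source, residual, rank, svals = np.linalg.lstsq(dilution, measured, rcond=1.0e-8)
print("Emission estimates:", source)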

Concentration / Utilities / Transfer Coefficient / Cost Function
This menu can be used to solve the Transfer Coefficient Matrix (TCM) for the source term vector, given a measured data
vector, where the matrix values are the dilution factors for each source-receptor pair. The measured data vector at
multiple receptor locations and/or times is required and must be defined in the DATEM format. The format of this file
is discussed in more detail in the GeoLocation menu. If a time-varying source solution is required, the Run Daily menu can
be used to generate the required dispersion simulations.

Technical details regarding the computational approach used to solve the TCM can be found in: Chai, T., Draxler, R.,
Stein, A., 2015. Source term estimation using air concentration measurements and a Lagrangian dispersion model -
Experiments with pseudo and real cesium-137 observations from the Fukushima nuclear accident. Atmospheric
Environment, 106, 241-251.

The inverse modeling executable lbfgsb input files:

Parameters_in.dat : control parameters
CSV_IN : TCM file
APRIORI : file name of first-guess source terms, if available

The inverse modeling executable lbfgsb output files:

SOURCE_OUT_000 : release results
CONC_OUT_000 : concentrations generated with the 'out.dat' release
Iterate_000 : minimization progress
RPT_OUT_000 : additional run-time output

where the 000 represents the process ID, which is always zero for user-generated applications.

Step 1: defines the binary input files, which are the output files from the dispersion simulations, configured to produce
output at an interval that can be matched to the measured sampling data. Ideally the model simulation emission rate
should be set to a unit value. Each simulation should represent a different discrete emission period. For example, a four-
day event might be divided into 16 distinct 6-hour emission periods, so the matrix would consist of 16
sources (columns) and as many rows as there are sampling data. The entry field in step 1 represents the wild-card string
*entry* that will be matched against the files in the working directory. The file names will be written into a file called
INFILE, which should be edited to remove any unwanted entries.

Step 2: defines the measured data input file, an ASCII text file in the DATEM format. The first record
provides some general information, and the second record identifies all the variables that then follow on all subsequent
records. There is one data record for each sample collected, and all times are given in UTC. This file defines the receptor
data vector for the matrix solution. It may be necessary to edit this file to remove sampling data that are not required, or
to edit the simulation that produces the coefficient matrix, to ensure that each receptor is within the modeling domain.

Step 3: defines the conversion from sampling units to emission units. For instance, the default value of
10^12 converts the emission rate units pg/hr to g/hr if the sampling data are measured in pg/m3 (pico = 10^-12). The
exponent is +12 rather than -12 because it is applied to the model results in the denominator
(Emission = Measured / Model_TCM). The height and species fields are the index numbers to extract from the
concentration file if multiple species and levels were defined in the simulation. The half-life (in days) is only required
when dealing with radioactive pollutants and the measured data need to be decay-corrected back to the simulation start
time.

Step 4: creates the comma-delimited input file called c2array.csv, with the dilution factors in a column for each source,
where each row represents a specific receptor location. The last column is the measured value for that receptor. The
column title represents the start time of the emission in days from the year 1900. This step calls the program c2array,
which reads each of the measured data values and matches them to the input files, extracting the dilution factors from
each source for that measured value. This step also creates an output file called c2array.txt, which contains the number of
rows and columns in the matrix. This information is needed when creating the Parameters_in.dat input file created by
Step 5.

Step 5: creates the PARAMETER_IN_000 input file used by the inverse modeling executable lbfgsb. Detailed
information is required that may not always be well known, and several solution iterations may be needed before the
optimal input parameters have been properly defined. The default settings almost certainly will have to be changed.

The first guess value represents an estimate of the source term. If time-varying information is known, it can be
specified by a negative value in this field and a file name defined in the APRIORI variable.
The scaling factor is used to reduce the numeric range of both the TCM values and the emission solution. A
smaller range improves the solution convergence.
The first-guess uncertainty needs to be defined as the sum of a fraction and a constant value.
The uncertainty should also be defined for the measurements by defining a fraction and a fixed value.
The solution may be bounded or unbounded.
A logarithmic transformation can be applied to the solution or TCM results prior to computing a solution.

Step 6: runs the inverse modeling executable with the options defined in Step 5. Different solutions can be tested by
sequentially repeating steps 5 and 6. The solution results from SOURCE_OUT_000 are copied to the output file
source.txt defined in this step and also displayed by the GUI. The Parameters_in.dat file can also be edited manually to set
parameters not defined in the GUI; in this case only repeating step 6 is required. A solution may contain negative values
as well as extreme positive emission results; such values are not realistic and result from model errors or other
uncertainties.

Fukushima Example Inputs (assuming emission output units of mBq)

bckg_const=1e17
LN_X=.false.
LN_Y=.true.
lbfgs_nbd=1
X_Scaling=1d12
Unc_o_f=1d-1
Unc_o_a=3d-0
Unc_b_f=1d3
Unc_b_a=1d14

PARAMETER_IN_000 detailed description

1. ================ DIMENSIONS ================


N_ctrl: Number of unknown source terms. If the source terms are 2-dimensional, Nx_ctrl and Ny_ctrl are
their ranges. Note that N_ctrl = Nx_ctrl * Ny_ctrl

2. ================ TCM_INPUTS ================


CSV_IN: TCM file name (in csv format)
N_obs: Number of observations
The CSV_IN file has (N_obs + 1) lines. The first line has the emission start times for all source terms (N_ctrl
columns). From line 2, each line has N_ctrl + 1 columns, with the observation listed as the last column.

3. ================ RUN_CONTRL ================


bckg_const: constant first guess, negative values prompt code to read from file
APRIORI: File name of the a priori source terms
LN_X/LN_Y: Switches for using log or original variables for control/metric variables
4. ================ SMOOTH_PNT ================
Smoother: Switch for smoothness penalty term
c_smooth: constant to control the source(t) smoothness, trial and error is needed to decide on the magnitude

5. ================ SMOOTH_P2D ================


Switch and parameters to control smoothness of 2-dimensional sources

6. ================ MODEL_UNC ================


UNC_Model: Under development. Should be turned off at the moment.
T_model_unc: Time scale of the model uncertainty growth. Still under development. Code section is not
available.
Floor_x: Lower bounds of control
Ceiling_x: Upper bounds of control
Floor_y: When LN_Y is turned on, Floor_y is enforced to avoid infinity

7. ================ LBFGS_CTRL ================


Max_iterations: Maximum number of function evaluations before termination. Refer to the lbfgsb source
code for a description of the other parameters.

8. ================ BOUNDS_ARR ================


lbfgs_nbd is an array to indicate the type of bounds imposed on the control variables, and must be specified
as follows:
0 : if x(i) is unbounded
1 : if x(i) has only a lower bound
2 : if x(i) has both lower and upper bounds
3 : if x(i) has only an upper bound

9. ================ UNCERTANTY ================


Uncertainties of the observations and of the a priori (first guess) are written in two parts, one proportional to the
variable itself (fractional part) and the other independent of the value of the original variable (additive
part). In addition to the measurement uncertainties, the observational uncertainties should include the
representativeness uncertainties. X_Scaling is useful when either the source or the observations are too small or
too large in their current units.
xoptim = emission rate / X_Scaling
[emission rate variance = xoptim variance * X_Scaling**2]
Observation variance = (obs*Unc_o_f + Unc_o_a)**2
a priori source variance = (xoptim*Unc_b_f + Unc_b_a)**2
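
As a worked illustration using the example settings above: with Unc_o_f=1d-1 and Unc_o_a=3d-0, a scaled observation
of 50 would be assigned an observation variance of (50*0.1 + 3)**2 = 64, i.e., a standard deviation of 8 in the scaled
units.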

Concentration / Special Runs
Special simulations may require a different executable file, modifications to the CONTROL file that are not supported by
the GUI, or interactions with other items under the Advanced Menu tab. More information is provided below for each
special simulation. Special model configurations may not be available for all operating systems. Note that most of the
special simulations may be run on a single-processor system or a multi-processor system (supporting MPI).

Test Inputs

This menu calls a program called HYSPTEST, a simplified version of HYSPLIT that reads the various input
files, such as CONTROL, SETUP.CFG, and EMITIMES, and determines whether many of the user options are correctly or
optimally configured. The program opens all the meteorological data files, releasing, transporting, and dispersing
particles in the same manner as a regular simulation, but only one particle per time step is processed. No output files are
created except the MESSAGE and WARNING files. Note that the non-standard, conditionally compiled HYSPLIT versions
are not supported by this testing framework. Both trajectory and dispersion input files can be read; however, only limited
testing is conducted for a trajectory calculation. Standard analysis messages are written to MESSAGE_mod. When
CONTROL file or SETUP.CFG file changes are suggested, these changes are summarized in the WARNING_mod
file. The modified (or unmodified) input files are written to CONTROL_mod and SETUP_mod.CFG. Use the GUI
COPY button to copy the changes to CONTROL and SETUP.CFG prior to running the model. The GUI variables
remain unmodified; the suggested changes can be loaded back into the GUI by retrieving CONTROL_mod
and SETUP_mod.CFG into their respective menus.

Summarized below are some of the model options that are tested:

ICHEM conflicts related to deposition options and pollutant definitions
Invalid mixing depth computation options
Checks for a defined deposition grid when deposition is turned on
Tests for precipitation or relative humidity fields when wet removal is turned on
Checks for TKE or variance in the meteorological data file when turbulence is selected
Compares starting location and concentration grid heights when the MSL flag is set TRUE
Estimates the number of particles actually released compared with the MAXPAR array allocation
Evaluates the random method setting against the actual number of particles released
Computes the spatial particle density at maximum particle age to determine the optimal NUMPAR

Daily

This menu will execute a special script to run the last configured dispersion calculation for multiple starting times,
creating an output file for each simulation. More information is available from the daily help menu.

Matrix

Although the setup of the concentration matrix calculation is similar to that of the trajectory matrix calculation, there is
an additional option that can be set in the Advanced Menu configuration tab that changes the nature of the concentration
output file to produce a source-receptor matrix; this is discussed further below. The matrix calculation is a way to
set up the CONTROL file for multiple starting locations that may exceed the GUI limit under the Concentration Setup
menu tab. Hundreds or thousands of starting points may be specified. The Run Matrix menu tab first runs a program that
reads the CONTROL file with three starting locations and then rewrites the same CONTROL file with multiple locations.
The multiple locations are computed from the number of starting points that fall in the domain between starting point 1
and starting point 2, where the offset of each new location is the same as that between starting locations 1 and 3. For
instance, if the original CONTROL file has three starting locations: #1 at 40N, 90W; #2 at 50N, 80W; and #3 at 41N, 89W;
then the matrix processing results in a CONTROL file with 121 starting locations, all one degree apart, between 40N
90W and 50N 80W.
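
The location expansion can be illustrated with a short sketch (a simplified stand-in for the actual CONTROL rewriting
program, using the example coordinates above):

# Matrix preprocessing: point 1 and point 2 bound the domain, and the
# offset between points 1 and 3 sets the grid increment.
lat1, lon1 = 40.0, -90.0   # starting location #1 (lower left)
lat2, lon2 = 50.0, -80.0   # starting location #2 (upper right)
lat3, lon3 = 41.0, -89.0   # starting location #3 (sets the increment)

dlat, dlon = lat3 - lat1, lon3 - lon1
locations = [(lat1 + i * dlat, lon1 + j * dlon)
             for i in range(int(round((lat2 - lat1) / dlat)) + 1)
             for j in range(int(round((lon2 - lon1) / dlon)) + 1)]
print(len(locations))   # 121 locations, one degree apart
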
In the normal model execution mode, the concentration contributions from multiple sources are summed on the
concentration grid; hence it is not possible to determine the fraction of the material that comes from each source location.
This can be seen in the illustration below, using the above configuration for the first 12 hours of the sample case.

However, if the "Matrix" conversion module is checked in the Advanced Concentration Configuration menu tab, then
the multiple source simulation maintains the identity of each source in the concentration output file. The Display Matrix
menu tab permits extraction of information for individual sources or receptors from this special concentration output
file. The results of the same simulation are shown in the illustration below. In this case the receptor check-box was set
and the receptor location was identified as 45.0,-75.0 with normalization. Therefore the graphic illustrates the fractional
contribution of each region to the first 12 hour average concentration at the designated receptor.
The following table illustrates the HYSPLIT matrix configuration. Emissions occur from each of N source locations, and
the receptors represent the concentration grid of M nodes. A single concentration output file is produced in which each
source contributes to its own concentration grid of M receptors. When selecting a "source" display, the M columns from
the source location (row) represent the downwind concentration pattern for that source. When a receptor location
(column) is selected, the contours represent the concentrations (each row of that column) contributing to that receptor
from each of the source locations. Source and receptor grids should be of comparable resolution.
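
Schematically, with Dij denoting the dilution factor from source i to receptor j:

           R1    R2    ...   RM
    S1    D11   D12   ...   D1M
    S2    D21   D22   ...   D2M
    ..    ...   ...   ...   ...
    SN    DN1   DN2   ...   DNM

Selecting source Si displays row i (the downwind pattern for that source), while selecting receptor Rj contours column j
(the contribution of every source to that receptor).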

There are two other, more quantitative, approaches to source attribution available through the menu, and both require
measured sampling data. In the first approach, the menu system is used to configure the model to execute a script to run
multiple iterations of the upwind dispersion calculation for periods that correspond with individual measured sampling
data. The results are then overlaid to determine the most likely source region. In the second approach, the menu system
is used to solve the source-receptor coefficient matrix for the source term vector, given a measured data vector, where
the matrix values are the dilution factors for each source-receptor pair, as described above.

Geolocation

In a manner similar to the daily menu, the dispersion model can be run for multiple simulations in time, but with each
simulation configured as an adjoint calculation (backward in time) from measured concentration data points. More
information on this procedure can be found in the geolocation help menu.
Ensemble-Meteorology

The ensemble form of the model, an independent executable, is similar to the trajectory version of the ensemble. The
meteorological grid is offset in X, Y, and/or Z for each member of the ensemble. The model automatically starts
each member on a single processor in a multi-processor environment, or cycles through the simulations on one
processor. The calculation offset for each member of the ensemble is determined by the grid factor as defined in the
Advanced Concentration Configuration tab. The default offset is one meteorological grid point in the horizontal and
0.01 sigma units in the vertical. The result is 27 ensemble members for all offset combinations. The normal Setup Menu
tab is used to configure the CONTROL file. Note that if fewer than 27 processors are available, the ensemble
configuration menu permits starting the calculation at any ensemble member number within the valid range. Because the
ensemble calculation offsets the starting point, it is suggested that for ground-level sources the starting point height
should be at least 0.01 sigma (about 250 m) above ground. The model simulation will result in 27 concentration output
files, named according to the file name setting in the CONTROL file as {cdump}.{001 to 027}, with a suffix equivalent
to the ensemble member number. On a single-processor system, the calculation may take some time to cycle through all
the members. The menu will be locked until the simulation has completed, and a message file window will open after
termination. Computational progress may be monitored by noting the generation of new concentration output and
message files with the ensemble number suffix in the /working directory. The concentration output from each member
can be displayed through the concentration display menu tab. However, to display the probabilities associated with the
multiple simulations, it is necessary to pre-process the data through the Display Ensemble menu tab. Using the default
configuration for the sample simulation, the illustration below represents the 90th percentile concentrations aggregating
all four output time periods. For instance, the blue contour in this 90th percentile plot represents the region in which only
10% of the ensemble members have air concentrations greater than 10^-15. If the meteorological ensemble is run through
a script or batch file instead of the GUI, the executable, with a command line parameter of the member number, must be
run once for each of the 27 members (see the sketch below).
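
A minimal batch sketch for that case follows, assuming the ensemble executable is named hycs_ens and accepts the
member number as its command line argument:

import subprocess

# Run the 27 meteorological ensemble members one after another; each member
# writes concentration and message files with its own number suffix.
for member in range(1, 28):
    subprocess.run(["./hycs_ens", str(member)], check=True)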

Ensemble-Turbulence

Another ensemble variation is the turbulence option, which also creates 27 ensemble members, but due to variations in
turbulence rather than variations due to gradients in the gridded meteorological input data. The variance ensemble
should only be run in the 3D particle mode, with the number of particles per member reduced in proportion to the
number of ensemble members relative to the number of particles required for a single simulation. For instance, if 27,000
particles are required to obtain a smooth plume representation, then each member should be run with 1,000 particles.
Normally the same random number seed is used when computing the turbulent component of the particle motion; in the
variance ensemble, however, the seed is different for each member, so that each member represents one realization of
the ensemble.

The variance ensemble can also be used to determine the number of particles required for a simulation by progressively
increasing the particle number until the decrease in variance with increasing particle number is no longer significant.
Because the number of particles required for a simulation increases with increasing distance from the source, a typical
downwind receptor location should be selected; then, using the box plot display option, the concentration
variability (max-min range) can be estimated to determine when a further decrease in range is no longer cost
effective (in terms of computational time) with increasing particle number.

Ensemble-Physics

The physics ensemble is created by running a script that varies, in turn, the value of one namelist parameter from its
default value. These are the parameters defined in the file SETUP.CFG and normally set in the Advanced Concentration
Configuration menu tab. If the namelist parameters have not been defined through the menu, then the default values are
assigned. In this first iteration, the GUI menu permits no deviation from the values assigned by the script; the entry box
is for information purposes only, to show the progress of the computation.

A summary of the current 15 ensemble variations is also written to the file ensemble.txt showing the name of the
concentration output file and member variation. Check the advanced menu help files for more information on each
variable.

cdump.001 : idsp = 2               Mass correcting dispersion calculation
cdump.002 : kmixd = 1              Mixed layer depth from temperature profile
cdump.003 : kmixd = 3              Mixed layer depth from modified Richardson number
cdump.004 : kmix0 = 50             Minimum mixed layer depth set to 50 m instead of 150 m
cdump.005 : kxmix = 1              Averaged vertical mixing in the PBL
cdump.006 : kdef = 1               Use horizontal velocity deformation for mixing
cdump.007 : kbls = 2               Determine stability from temperature profile
cdump.008 : kblt = 1               Beljaars equations for turbulence
cdump.009 : kblt = 3               Use TKE for turbulence
cdump.010 : kblt = 5               Hanna equations for turbulence
cdump.011 : vscales = 200          Fixed stable vertical Lagrangian time scale
cdump.012 : vscales = -1           Variable Lagrangian time scale, stable and unstable
cdump.013 : kblt = 1 & vscales = -1    Beljaars with variable time scale
cdump.014 : kblt = 3 & vscales = -1    TKE with variable time scale
cdump.015 : kblt = 5 & vscales = -1    Hanna with variable time scale

Ensemble reduction based on minimization of square error


Over the last few years, the use of dispersion model ensembles has become an increasingly attractive approach to
studying atmospheric transport in the lower troposphere. Ensembles are constructed by combining multiple numerical
weather prediction simulations, different dispersion models, variations in a particular model's physics
parameterizations, or different combinations of these variations. The determination of the optimum number of multi-
model members and/or individual model physical features to vary is the primary difficulty to overcome when
constructing these dispersion simulation ensembles. In many studies the ensemble members simply consisted of the
available model outputs from different research groups, regardless of the model characteristics, or the result of
performing an arbitrary number of runs with different configurations by individual researchers. Both approaches
increase the possibility of redundancy, which means that many of the ensemble members may not be much different
from each other. In general, any ensemble might contain redundant information that overemphasizes certain transport
and dispersion features that can be inaccurate. For example, perhaps a sub-group of members all use the same
meteorological data, which might not be as accurate as another meteorological data set that is used by fewer members.
Simply due to the dependency of many of the members on the same data, the ensemble including these members would
be less accurate than one constructed from only the independent members. Conversely, independence among ensemble
members does not necessarily imply that the reduced group of runs will be more accurate than the full ensemble
because, by chance, the latter could also be overemphasizing redundant members that happen to be more accurate.

Consequently, reduction techniques can be applied with the intent of producing more accurate results than those
obtained with the full ensemble, while requiring fewer computing resources. Solazzo and Galmarini (2014)
demonstrated that an ensemble can be reduced by optimizing the skill of the mean taken among all the possible subsets
of ensemble members. Following this methodology, we calculate the average of all the possible model combinations
composed of an increasing number of sub-ensemble members, up to the total number of members of the full ensemble,
and estimate their mean square error (MSE). If M is the total number of ensemble members and n is the number of
sub-ensemble members, then the number of possible combinations is given by M!/(n!*(M-n)!). In other words, if our
ensemble has, say, 24 members, we will combine them in 276 pairs, 2024 trios, 10626 quartets, etc., and determine
which combination provides the minimum MSE. The reduction technique is applied by running a postprocessing
program that reads the different ensemble member outputs along with the measured data (all in DATEM
format) and calculates the minimum square errors for each possible model output combination.
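
A brute-force version of this search is compact enough to sketch. The example below is illustrative only, with random
numbers standing in for the DATEM-format model predictions and measurements; it finds the member subset whose
mean has the lowest MSE:

import itertools
import numpy as np

# rows = sampling events, columns = ensemble members (synthetic stand-ins)
predictions = np.random.rand(100, 6)
measured = np.random.rand(100)

best_mse, best_subset = None, None
nmembers = predictions.shape[1]
for n in range(1, nmembers + 1):
    # every possible combination of n members out of nmembers
    for subset in itertools.combinations(range(nmembers), n):
        ens_mean = predictions[:, list(subset)].mean(axis=1)
        mse = np.mean((ens_mean - measured) ** 2)
        if best_mse is None or mse < best_mse:
            best_mse, best_subset = mse, subset
print("minimum MSE %.4f using members %s" % (best_mse, best_subset))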

E. Solazzo, S. Galmarini, 2014. The Fukushima-137Cs deposition case study: properties of the multi-model ensemble,
Journal of Environmental Radioactivity, Available online 22 March 2014, ISSN 0265-931X,
http://dx.doi.org/10.1016/j.jenvrad.2014.02.017

Global
The global simulation is a HYSPLIT "grid-in-plume" option, in which the Lagrangian particle mass can be transferred
to a global Eulerian model after a designated number of hours. Lagrangian and Eulerian dispersion and transport can
occur simultaneously, and each model has its own output grid. The advantage of this approach is that very long-
range calculations may at times require too many particles to properly represent the pollutant distribution. In this way it
is possible to take advantage of the more precise Lagrangian approach near the source and the more computationally
efficient Eulerian computation at the hemispheric and global scales. More information on how to configure this
simulation can be found in the global model help menu.

Dust Storms

A model for the emission of PM10 dust has been constructed (Draxler, R.R., Gillette, D.A., Kirkpatrick, J.S., Heller, J.,
2001. Estimating PM10 Air Concentrations from Dust Storms in Iraq, Kuwait, and Saudi Arabia. Atmospheric
Environment, 35, 4315-4330) using the concept of a threshold friction velocity that depends on surface
roughness. Surface roughness was correlated with geomorphology or soil properties, and a dust emission rate is
computed where the local wind velocity exceeds the threshold velocity for the soil characteristics of that emission cell.
A pre-processing program was developed that accesses the HYSPLIT land-use file over any selected domain and
modifies the input CONTROL file such that each emission point entry corresponds with a "desert" (active sand sheet)
land-use grid cell. The original PM10 flux equation was replaced by a more generic relationship (Westphal, D.L., Toon,
O.B., Carlson, T.N., 1987. A two-dimensional numerical investigation of the dynamics and microphysics of Saharan
dust storms. J. Geophys. Res., 92, 3027-3029).

The dust storm simulation is configured in the same way as the matrix calculation in that it is necessary to define three
source locations, the first two representing the limits of the domain and the third defining the emission grid resolution.
The pre-processor then finds all emission points within that domain that have a desert category and modifies the
CONTROL file accordingly. The dust box must be checked in the advanced configuration menu to compute the PM10
emission rate. As an example, we can configure the model to run the large Mongolian dust storm of April 2001. An
animation of the calculation results can be downloaded. To run the same simulation it will be necessary to obtain the
first two weeks of northern-hemisphere meteorological analysis data (FNL.NH.APR01.001). A pre-configured
CONTROL file (dust_conc) should be retrieved from the working directory. The CONTROL file defines the emission
domain by the three starting locations: 35N-90W to 50N-120W, with the grid increment set by 36N-91W. There is no
point in defining an emission grid of finer than one-degree resolution, because the resolution of the land-use data file is
one degree and the meteorological data are closer to two degrees. Once the model is set up for the simulation, including
the dust check-box in the configuration menu, execute the model from the Special Simulations / Run Dust Storm menu
tab. Not available through the GUI, but available as another option in the SETUP.CFG namelist file, is the emission
threshold sensitivity factor, which defaults to one. For instance, adding the line P10F=0.5 to the namelist file would
cause dust emissions to occur at half the normal threshold velocities. Starting the model will open the window shown
below, indicating the revision of the CONTROL file.
The message indicates that the initial 3-location CONTROL file was reconfigured by the dustbdy program for 105
source locations. That means that in the domain specified, 105 one-degree latitude-longitude grid cells were found to
have a desert land-use category. If none are found, then the CONTROL file is deleted to prevent model execution. Click
on Yes or No to continue (Yes just deletes the window), and the model execution will then start. PM10 dust particles are
only emitted from those 105 cells where the wind speed exceeds the emission threshold; therefore it is possible to have
simulations with no emissions. An example of the output after 24 hours of simulation time is shown in the illustration
below.
The concentrations represent a 3-hour average from 21 to 24 hours after the start of the simulation. It is not possible to
say exactly from when or where particles were emitted, except to note that the 105 potential source locations are shown.
The emission algorithm emits in units of grams, but in configuring the concentration display, the units were changed
to ug by defining a conversion factor of 10^6. Maximum concentrations are on the order of 100 ug/m3, but the layer was
defined with a depth of 3 km to facilitate comparison with satellite observations. The simulation could be rerun with a
shallower layer to obtain concentrations more representative of near-ground-level exposures.

Daughter Products

A nuclide daughter product module has been incorporated into HYSPLIT. Given a chain of decay from a parent nuclide,
the model can calculate the additional radiological activity due to in-growth of daughter products using the Bateman
equations. Information about the daughter products available in the model can be found in ../auxiliary/ICRP-
07.NDX. More information on how to configure this simulation can be found in the daughter product help menu.

Concentration / Special Runs / Daily
This special menu is used to execute multiple automated concentration simulations for days, weeks, or months. The
script will start the first simulation at the time set in the CONTROL file and then generate new simulations based upon
the values shown in the menu. In the example shown below, the simulation starts at 95 10 16 00. Each simulation has a
duration of 12 hours (only shown in the SETUP menu). A new simulation will be started every 6 hours until one day
after the start time. The model output files are named according to the base name set in the CONTROL file but are
appended with either a six-digit month-day-hour field or a sequential number. This approach differs from a single
simulation with multiple starting times in that each simulation is independent and a new output file is created for each
run.

The model should be configured and run for the first simulation time through the standard run menu to ensure that the
simulation is configured correctly. If part of the simulation requires meteorological data from the previous month or the
next month, these data should be included in the base simulation test.

After pressing the Execute Script button, the start of each simulation is noted in the log until the script completes. Note
that concentration simulations are much slower than trajectories, and it may take a while before the log is updated. In
this example, four simulations were completed.

A variation of the basic simulation that reduces the run duration by the new-run increment time is configured when the
Shorten each new run duration checkbutton is enabled. This procedure is the first step in creating the dispersion files
needed to generate a Transfer Coefficient Matrix; see the help sections on the SVD and Cost Function solutions of the
TCM. In the example shown, setting the run duration reduction checkbutton results in only two completed simulations:
the first has a duration of 12 hours, the second 6 hours, and the subsequent simulation would have had a duration of
zero hours. The concept behind the run duration reduction is that the output from all dispersion simulations should have
the same end time. In the example shown here, the initial run duration should have been 24 hours, not 12; then all four
simulations would have completed, and the last simulation, the one starting at 18 UTC, would have had a duration of 6
hours and ended at 00 UTC the next day. This issue must be corrected in the SETUP menu, where the base run duration
needs to be changed from 12 to 24 hours.

Concentration / Special Runs / GeoLocation
Summary: This menu is used to configure the model to execute a script to run multiple iterations of the upwind
dispersion calculation for periods that correspond with individual measured sampling data. The model results are then
overlaid to indicate the most probable source regions. The CONTROL file should have been previously defined for a
forward calculation that corresponds to the sampling period of the measured data. The measured data file must be in the
DATEM format. The output is written to the source.html HTML file in the working directory.

Step 1: defines the measured data input used in this series of calculations. The dispersion model is run in its backward
mode, from each of the sampling locations, with a particle mass proportional to the measured concentration and with the
particles released over a period corresponding to the sample collection period. Sampling data files are in the ASCII text
DATEM format. The first record provides some general information, the second record identifies all the variables that
then follow on all subsequent records. There is one data record for each sample collected. All times are given in UTC.
The DATEM sampling data records have the following format:

INTEGER - Four digit year for sample start
INTEGER - Two digit month for sample start
INTEGER - Two digit day for sample start
INTEGER - Four digit hour-minutes for sample start
INTEGER - Four digit hour-minutes for duration of the sample
REAL    - Latitude of the sampling point (positive is north)
REAL    - Longitude of the sampling point (positive is east)
REAL    - Air concentration in mass units per cubic meter
INTEGER - Site identification

A sampling data file which is used in the following calculations can be found in examples\matrix\measured.txt. These
synthetic measurements (units = picograms per cubic meter) were created from a model simulation using the sample
meteorological data in the working directory for a hypothetical 6-minute (0.1 hr) duration release of 10 kg of material
from 40N 80W starting at 1200 UTC 16 Oct 1995.
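
Under this layout, a single record might look like the following (values are illustrative only, not taken from
measured.txt):

1995 10 16 1200 0600 45.00 -75.00 23.4 1001

i.e., a 6-hour sample starting at 1200 UTC 16 Oct 1995, collected at 45N 75W, measuring 23.4 pg/m3 at site 1001.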

Step 2: creates one CONTROL.{xxx} file for each sample (data record) in the previously defined measured data file.
The CONTROL files are numbered sequentially from 001 to a maximum of 999, the current limit of the program
used to overlay the simulation results. The individual simulation control files are created from the current configuration
shown in the concentration setup menu. The examples\matrix\control_geo and examples\matrix\setup_geo file
templates should be retrieved into the setup and configuration menus. The template is a configuration for a forward
simulation that encompasses the entire sampling period, with output intervals that correspond with the sampling
intervals of the measured data - essentially a configuration that could be used to predict the concentrations at the
measurement locations if the actual location and amount of the release were known. This step calls the pre-processor
program (dat2cntl), which uses the configuration as a template to design each individual simulation CONTROL file.
Each of these CONTROL files is configured as a backward simulation for the entire computational period, with the
particle release occurring over the time of the sample collection. This ensures that each simulation output file will
contain an identical number of output periods, regardless of the time of the particle release. The template control file
should have all key parameters specified, such as the starting time and the center of the concentration grid. Do not use
the zero default options in the CONTROL file; explicitly set all variables. All simulations must be identical.

There are three solution options. The default option, Numerator, discussed above, uses the measured concentration in
the numerator as the emission rate, resulting in a source sensitivity map weighted by the measured data. Checking the
Inverse box sets the source term as the inverse of the measured concentration. This modeling scenario is comparable to
the S = R/D situation described in the matrix solution help file. In this case the model is computing D/R, where D is
the dilution factor computed by the model and R is the measured concentration. The source term for the calculation is
set to 1/R, and only measurements where R is greater than zero are considered. Therefore, the resulting
output (D/R) is an estimate of the inverse of the source term (1/Q) required to match the measured value for that
simulation. A unit conversion factor needs to be set to output the appropriate mass units. The last option is to set the
Constant radiobutton, which sets the emission rate equal to the value in the constant conversion factor entry
box. In this type of simulation each sample gets equal weight, and the model results may be used to determine the
optimal emission rate required to match the measured data.

Step 3: sequentially runs the dispersion simulations, starting with CONTROL.001 through the last available control file.
Each simulation uses the same namelist configuration shown in the menu. Note that a simulation is run for each
measurement, zero measurements as well as high values. Non-zero measurements result in an hourly emission rate equal
to the measurement value, while zero measurements are set to a very small, but non-zero, value. In the context of this
particular calculation, the intent of the source attribution is primarily to determine the source location, and perhaps its
timing, rather than to estimate the emission rate from the measurements. Determination of emission rates should be
done through the matrix menu option. The measurement data are only used to weight the source-sensitivity results for
each simulation. Depending upon the model setup and configuration, simulation wall-clock times may vary
considerably. Each simulation produces a binary concentration file and a message file with the same run number suffix
as the control file.

Step 4: shows the multiple simulation results by averaging the source sensitivity function at each grid point over all the
simulations. The dispersion model result of the upwind (backward) calculation looks similar to the air concentration
field of the downwind (forward) calculation, but it represents not concentration but the source regions that may
contribute to the air concentration at the measurement location from which the upwind calculation was started. There
are two optional parameters that influence the output graphic. The time aggregate default is one, meaning that each
sampling period is represented by one graphic. In the example calculation shown below, the source sensitivity function
is shown for the last time period of the simulation and represents the average of all the simulations (zero and non-zero)
from different time periods valid for that 6-hour sampling period. A time aggregation value of 5 would average the
results from all 5 time periods into one graphic. The zero threshold value can be used to eliminate the very low-level
contours that result from the zero-emission simulations. For instance, selecting a value of 1.0E-15 (1/1000 of a pg/m3)
would set to zero any grid points less than that value.

The comparable graphic for the inverse calculation is shown below for the mean emission values of the 15 simulations
that had non-zero measurements. Before creating control files for the inverse simulation, previous control files should
be deleted to avoid mixing the two types of simulations. For this example, the measured data file has 15 non-zero
measurements and 30 zero measurements, and the units conversion factor should be set to 1.0E-15 to go from pg to kg.
The resulting interpretation of the graphic is that the central contour (value = 1) indicates an average emission of 1 kg in
that region. The outermost contour (0.01) would require 100 kg to be released to match the measured data. Greater
dilution (smaller D) requires correspondingly greater emissions to match the measured data.
Concentration / Special Runs / Global
The Global Eulerian Model (GEM) is invoked as a series of subroutines within the main HYSPLIT transport and
dispersion code. Particles or puffs are always first released in the Lagrangian framework and carried within HYSPLIT
until they exceed a certain age, at which point their mass is transferred to the GEM routines. Particles can be released
and transferred for the entire duration of the simulation. The only requirement is that one pressure-level global
meteorological grid (2.5 degree reanalysis or 1 degree GFS) be defined for the HYSPLIT calculation.
Additional finer-resolution regional meteorological grids can be defined for the Lagrangian portion of the transport
calculation. Details of the global model have been previously published: Draxler, R.R., 2007. Demonstration of a global
modeling methodology to determine the relative importance of local and long-distance sources, Atmospheric
Environment, 41:776-789, doi:10.1016/j.atmosenv.2006.08.052.

General Instructions

Although the HYSPLIT-GEM version has its own executable, without the required namelist file GEMPARM.CFG the
calculation will proceed as a standard Lagrangian HYSPLIT simulation. In this situation, the model will write the
default file gemparm.bak as a reminder that a GEM configuration file is required to invoke the global subroutines. This
file can then be edited and renamed if not using the GUI interface. GEM and HYSPLIT each create their own
concentration output files, and if the total concentration is required, the results from the two calculations must be added
together. The program concadd is included in the distribution and can be accessed from the global display menu by
selecting the plume add checkbox. Furthermore, the GEM concentration grid is always a snapshot concentration; there
is no averaging option. In general, the model simulation is configured like any other HYSPLIT simulation, through the
CONTROL and SETUP.CFG files. Additional GEM-specific options are set in the GEMPARM.CFG file. Not all GEM
options are available through the GUI; some can only be modified by editing the file directly.

SETUP.CFG Namelist Configuration


There is only one parameter in the HYSPLIT namelist that is required for the GEM subroutines (a minimal example
follows this list):

1. Set GEMAGE (default=48 hours) to the age at which a particle or puff is transferred to the global model. Note that
when GEMAGE<0, particles switch to the GEM grid when the particle meteorological grid switches to the global
meteorological grid, or when the particle age equals |GEMAGE|.
2. When using an EMITIMES file, QCYCLE is not used for emission cycling. Also, if the EMITIMES file has fewer
sources than defined in the CONTROL file, the emissions specified in the CONTROL file need to be set correctly
and match the first value in EMITIMES.
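
For example, a minimal SETUP.CFG fragment transferring particles to GEM after 24 hours might contain (illustrative;
the 24-hour value is arbitrary, and any other namelist variables needed for the application would be added to the same
group):

&SETUP
 gemage = 24,
/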

GEMPARM.CFG GUI Namelist Options

Only a subset of the namelist options is available to be set through the GUI. Note that concentration (mass/m3) output
will always be written to gemconc.bin as individual or averaged layers, but the integrated output (mass/m2) will only be
written to gemzint.bin. The bottom and top values are given as index numbers, where 1 is the bottom layer. The layer
thickness corresponds with the input meteorological data and cannot be modified. The GUI interface is shown below:

GEMPARM.CFG Namelist Description

HMIX=200000.0 - horizontal mixing (m2/s) at the equator


VMIX=50.0 - maximum vertical mixing (m2/s)
KMASS=2 - flag set to conserve mass (0:skip 1:show 2:yes)
KINIT=0 - concentration initialization (<0:no_run 0:zero 1:lat_bands 2:dew-point 3:last_dump 4:global_v37)
CKON=0.0 - value of initial concentration field if kinit>4
CMIN=0.0 - minimum value when using regression initialization (kinit=1|2)
CMAX=1.0E+25 - regression fit maximum value
WFACT=1.0 - sensitivity of advection to vertical velocity
KDIVG=0 - flag to compute vertical velocity from the divergence field (kdivg=1)
GEMDUMP='gemdump.bin' - daily 3-dimensional dump of concentration field (for initialization kinit=3)
GEMCONC='gemconc.bin' - HYSPLIT formatted concentration output file
GEMZINT='gemzint.bin' - HYSPLIT formatted vertically integrated concentration
CFACT=1.0 - internal (model) concentration to external (output) concentration units conversion factor
KZBOT=1 - grid cell index for the bottom output (or integration/averaging) level
KZTOP=1 - grid cell index for the top output (or integration/averaging) level
KZAVG=0 - flag for vertical averaging (0:none 1:average{bot-top} 2:integral{bot-top} 3:{2+0} 4:{2+1})
WFREQ=3 - output frequency in hours
IHOUR=0 - initial UTC output hour

Concentration / Special Runs / Daughter products
Setting up this run consists of the following steps.

Step 1: define the nuclide parent name (e.g. Cs-137) and run a preprocessor (nuctree) that creates a summary text file
(DAUGHTER.TXT) containing the daughter nuclides produced by the parent nuclide, along with the half-life and
branching fractions. This file is required to be in the working directory when running HYSPLIT for this application. In
addition, the preprocessor will read the CONTROL file and create a new file (CONTROL.DAUGHTER) containing the
daughter information. The names of the species are given as numbers that correspond to the daughter product
names written in the DAUGHTER.TXT file. Finally, the GUI will copy CONTROL.DAUGHTER back to
CONTROL.

Step 2: configure the namelist file (SETUP.CFG) to indicate that the daughter product module will be used
(ICHEM=11) and set MAXDIM to the number of daughters obtained from DAUGHTER.TXT.
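
For instance, if DAUGHTER.TXT were to list, say, three daughter products, the relevant SETUP.CFG entries would be
(illustrative):

&SETUP
 ichem = 11,
 maxdim = 3,
/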

Concentration / Multi-processor
Special simulations may require a different executable file, modifications to the CONTROL file that are not supported
by the GUI, or interactions with other items under the Advanced Menu tab. More information is provided below for
each special simulation that can be run under a multi-processor environment that supports MPI. These special
simulations are available only for UNIX operating systems. All the MPI simulations execute the special script
run_mpi.sh, which can be found in the /exec directory. This script executes the appropriate MPI executable variation of
the concentration model and will almost certainly require some customization to match the local operating system
environment.

Run MPI Model

The standard concentration model can be run on multi-processor systems. As pollutant particles are released during the
simulation, they are parsed out in sequence to the available processors. The calculation proceeds independently until the
end of a concentration averaging period. At this point, the concentrations from each processor are summed, and only
one concentration output file is updated. The MPI version can be quite effective in speeding up simulations requiring the
release of many particles. No special configuration or control file is required and output can be viewed using the
standard concentration display menu.

Run Matrix Model

The multiprocessor version of the matrix calculation is configured the same way as for a single processor system. The
MPI calculation proceeds as described above for the standard MPI model simulation.

Run Ensemble Model

The ensemble model automatically starts each member on a single processor in a multi-processor environment. The
multiprocessor version of the ensemble calculation is configured the same way as for a single processor system. Note
that if fewer than 27 processors are available, the ensemble configuration menu permits starting the calculation at any
ensemble member number within the valid range of 001 to 027.

The Model Launch Menu

The model launch menu contains four entries: the number of processors requested for the simulation, the name of the
executable, the Prep-Code, and the working directory. Each of these is assigned a default value depending upon the
calling menu. Normally there should be no reason to change any of these values except the number of processors. If the
working directory is changed and does not already exist, a new directory is created, and the script changes to that
directory before starting the simulation.

The Prep-Code is just an internal flag to set any preprocessor options:

M = matrix
E = ensemble
S = standard

If the Prep-Code field is left blank, the MPI script is not called, and the named executable will be run in the background.
In this way multiple jobs can be submitted on a multi-processor system in a non-MPI environment. However, in both
MPI and non-MPI applications, when simultaneous calculations are desired, it is important that each simulation have its
own working directory.

Concentration / Display / File Format
Concentration packing has been implemented with HYSPLIT version 4.5. The updated format is downward compatible
in that all display programs can read files produced by versions prior to 4.5, but older versions of the display
programs cannot read the new packed output format. Note that HYSPLIT V4.5 can be configured to produce the older-
style unpacked concentration files. Concentration file packing does not compress the information into fewer bytes;
rather, each stored value is written using twice as many bytes. The packed files are generally smaller because only
concentration values at the non-zero grid points are written to the output file by the model. However, this requires the
grid point location to be written with each concentration, hence the additional bytes. If most of the grid is expected to
have non-zero concentrations, then the old-style format will save space. The output format of the unformatted binary
(big-endian) concentration file written by the dispersion model (hycs_std) and read by all concentration display
programs is as follows:

Record #1

CHAR*4 Meteorological MODEL identification
INT*4  Meteorological file starting time (YEAR, MONTH, DAY, HOUR, FORECAST-HOUR)
INT*4  NUMBER of starting locations
INT*4  Concentration packing flag (0=no 1=yes)

Record #2 - Loop to record: Number of starting locations

INT*4  Release starting time (YEAR, MONTH, DAY, HOUR)
REAL*4 Starting location and height (LATITUDE, LONGITUDE, METERS)
INT*4  Release starting time (MINUTES)

Record #3

INT*4  Number of (LATITUDE-POINTS, LONGITUDE-POINTS)
REAL*4 Grid spacing (DELTA-LATITUDE, DELTA-LONGITUDE)
REAL*4 Grid lower left corner (LATITUDE, LONGITUDE)

Record #4

INT*4  NUMBER of vertical levels in concentration grid
INT*4  HEIGHT of each level (meters above ground)

Record #5

INT*4  NUMBER of different pollutants in grid
CHAR*4 Identification STRING for each pollutant

Record #6 - Loop to record: Number of output times

INT*4  Sample start (YEAR MONTH DAY HOUR MINUTE FORECAST)

Record #7 - Loop to record: Number of output times

INT*4  Sample stop (YEAR MONTH DAY HOUR MINUTE FORECAST)

Record #8 - Loop to record: Number of levels, Number of pollutant types

CHAR*4 Pollutant type identification STRING
INT*4  Output LEVEL (meters) of this record

No packing (all elements):

REAL*4 Concentration output ARRAY

Packing (only non-zero elements):

INT*4  Number of non-zero elements, then for each element:
INT*2  First (I) index value
INT*2  Second (J) index value
REAL*4 Concentration at (I,J)
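
As an illustration of how these records are laid out on disk, the sketch below reads Record #1. It assumes the common
(but compiler-dependent) convention of 4-byte record-length markers framing each Fortran sequential record, and an
output file named cdump:

import struct

def fortran_record(f):
    # Sequential unformatted records are framed by 4-byte byte counts.
    nbytes = struct.unpack('>i', f.read(4))[0]
    payload = f.read(nbytes)
    f.read(4)                  # skip the trailing byte count
    return payload

with open('cdump', 'rb') as f:
    rec1 = fortran_record(f)
    model = rec1[0:4].decode('ascii')   # meteorological model ID
    year, month, day, hour, fcst, nlocs, packing = struct.unpack('>7i', rec1[4:32])
    print(model, year, month, day, hour, fcst, nlocs, packing)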

Advanced / Special Topics / Particle Dump File Format
The concentration configuration menu provides an option to write a model initialization file, which by default is always
named "PARDUMP" (for particle dump). This file can be written at regular intervals during the simulation, a
convenient way to restart a simulation in case of unexpected failure. To restart the model using the PARDUMP file it is
only necessary for the file to be present in the root working directory. If the internal time stamp of the file matches the
start time of the simulation, the model will initialize the particle count from the file before emitting new particles
according to the emission scenario defined in the control file. The format of the PARDUMP file is given below:

Record #1

INT*4  Number of particles
INT*4  Number of pollutants per particle
INT*4  Time of particle dump (YEAR, MONTH, DAY, HOUR, MINUTES)

Record #2 - Loop to record: Number of particles

REAL*4 Particle pollutant mass (times the number of pollutants)
REAL*4 Particle LATITUDE, LONGITUDE, HEIGHT, SIGMA-H, SIGMA-W, SIGMA-V
INT*4  Particle AGE, DISTRIBUTION, POLLUTANT, METEO-GRID, SORT-INDEX
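
Following the record layout above, a reader can be sketched in the same way as the concentration file example (again
assuming 4-byte Fortran record markers, and MAXDIM=1 so that each particle carries a single mass value):

import struct

def fortran_record(f):
    # Sequential unformatted records are framed by 4-byte byte counts.
    nbytes = struct.unpack('>i', f.read(4))[0]
    payload = f.read(nbytes)
    f.read(4)
    return payload

with open('PARDUMP', 'rb') as f:
    hdr = fortran_record(f)
    npart, npoll, year, month, day, hour, minute = struct.unpack('>7i', hdr)
    for _ in range(npart):
        rec = fortran_record(f)
        (mass,) = struct.unpack('>f', rec[0:4])            # one pollutant assumed
        lat, lon, hgt, sigh, sigw, sigv = struct.unpack('>6f', rec[4:28])
        age, dist, poll, grid, sort = struct.unpack('>5i', rec[28:48])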

The "Particle" tab of the "Special File Display" menu brings up a Windows-based viewer that shows the particle
positions over a map background. The display can be zoomed and otherwise adjusted using the left and right mouse
buttons in conjunction with the shift and ctrl keys. Help is provided on the screen, with the left and right side comments
corresponding to the respective mouse buttons. The particle viewer can also be used to overlay satellite images on the
particle positions; more information on this is provided in the "FTP Satellite Data" help menu. The particle position file
may be converted to a binary concentration file through the command line utility program par2conc.

The number of pollutants per particle is set by MAXDIM; the default is 1.

HEIGHT is in meters above ground level (AGL), even when KMSL=1 is set in SETUP.CFG.
Note that there may be some differences between the value written to the PARDUMP file and the actual value when the
terrain height is large. The terrain height is not passed to the PARDUMP reading or writing routines, so the height is
written approximating the terrain height as 0 m. This results in a small difference which depends on the terrain height
and the internal scaling height:

Za' = (1-Zt)*ZMDL
Za  = (1-Zt)*(ZMDL-ZTER)
Za' = (ZMDL / (ZMDL-ZTER)) * Za

where Za' is the approximated height written to the PARDUMP file, Za is the actual height of the particle, ZMDL is the
internal scaling height (default 25 km; may be higher if the model top is chosen higher, see #6), ZTER is the terrain
height, and Zt is the terrain-following coordinate. For example, a height of 100 m AGL in an area with a terrain height
of 1600 m would show up as a height of 106.8 m in the PARDUMP file. This has no impact on the intended
functionality, which is to restart the model from a PARDUMP file created by the model. However, if you are writing
your own PARDUMP file or using the output for other purposes in a high-altitude area, you may wish to take this into
account.

SIGMA-H is the horizontal puff size in meters.

VEL-W (the SIGMA-W field above) is the current value of the turbulent velocity in the vertical in m/s.
VEL-V (the SIGMA-V field above) is the current value of the turbulent velocity in the horizontal in m/s.

Currently VEL-U is not written to the pardump file, and when the model is initialized from a pardump file, it is
assumed that VEL-U = VEL-V.

POLLUTANT is an integer, N, which corresponds to the Nth pollutant species defined in the CONTROL file.

METEO-GRID is an integer, N, which corresponds to the Nth meteorological data grid defined in the CONTROL file.

SORT-INDEX Each particle or puff is assigned its own unique sort-index. The sort index is used within HYSPLIT for
looping through all the computational particles or puffs. If a computational particle is removed during the simulation,
the sort indices are reassigned so that there are only as many sort indices as particles. Consequently, the sort index for a
computational particle may change during a simulation. If ichem=8, then particle deletion is turned off, and the sort
index will refer to the same computational particle throughout the simulation. In the case of puffs, the sort indices are
reassigned when puffs are deleted, merged or split.

Table of Contents
Concentration / Display Data / Terrain Overlay
The use of several menu features is demonstrated here that will permit the creation of a map of the concentration plume with political
boundaries and contours of the terrain as defined by the meteorological data. The basic process is to create a shapefile of the terrain and add that
file to a shapefile of the political boundaries. Although this demonstration is for terrain and concentration, any line- or contour-based shapefile
can be displayed through the concentration or trajectory plotting programs.

Open the Meteorology / Display Data / Contour Map menu and select the meteorological data file, the station height field (SHGT=terrain), and
set the GIS checkbox, which will output the terrain field in generate format. It might be necessary to force the contours and the center of the map
to get a nice looking display over the region of interest. In this case for the west coast of the U.S. the maximum terrain height was set to 2500 m
with an interval of 250 meters between contours.

Run the contour program and the display shown below will be created. In the working directory there will be an additional file called
GIS_METEO_01.txt, which is the generate format text file of the terrain contours.
Now open the GIS to Shapefile utility menu and browse for the generate format file name. Select the "Polygons" radiobutton to ensure that the
terrain polygons are treated as closed connected lines. After processing the input file, four output files will be created with the base name of
contour. The subsequent plotting programs only need the contour.shp file, but the other files may be needed for import into other GIS
applications.
If you do not have a plume simulation already completed, open the concentration setup menu and run the default test case, but move the starting
location to 40N 125W and run the model for 24 rather than 12 hours. To simplify the output, set the concentration grid averaging time to 24 h
and reduce the grid resolution from 0.05 to 0.10. Then run the model.

After the run completes, go to the hysplit/graphics directory and copy arlmap.shp and shapefiles_arl.txt to the working directory. Rename
shapefiles_arl.txt to shapefiles.txt. Then open Notepad in the working directory to edit shapefiles.txt, adding the additional line defining the
shapefile with the terrain. For this display, the line is twice as thick as the map background and the color is defined as black (0.0 0.0 0.0).
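
For instance, the edited shapefiles.txt might then contain the two records below, each giving a shapefile name followed by a line thickness and
red-green-blue values; the numbers here are illustrative rather than copied from the distributed file:

'arlmap.shp' 0.005 0.4 0.6 0.8
'contour.shp' 0.01 0.0 0.0 0.0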

Now open the concentration / display / contours menu and define the shapefiles.txt in the field for the map background. This will cause the
display program to use the files defined in this file rather than the default map background. Multiple shapefiles can be displayed at the same
time.
Executing the display will result in the following image showing the intersection of the plume with the elevated terrain as it moves to the east.
Table of Contents
Advanced / Special Topics
This section provides some guidance in configuring the model input to perform certain specialized calculations. The default
configuration supplied with the test meteorological data is confined to a simple trajectory and inert transport and
dispersion calculation. Some of these more complex scenarios are configured through the Advanced menu Configuration
Setup tab which modifies the "SETUP.CFG" namelist file.

Particle or Puff Releases

The concentration model default simulation assumes 3D particle dispersion (horizontal and vertical). Other options are
set with the INITD parameter of the SETUP.CFG namelist file defined in the advanced menu section. Normally changes
to the dispersion distribution are straightforward. However, there are some considerations with regard to the initial
number of particles released. The default release is set to be 2500 particles over the duration of the emission cycle (see
NUMPAR). A 3-dimensional (3D) particle simulation requires many more particles to simulate the proper pollutant
distribution, the number depending upon the maximum downwind distance of the simulation and the duration of the
release; longer distances and durations require more particles. Too few particles result in noisy concentration fields. A 3D puff
simulation starts with one puff as the puff-splitting process in conjunction with the vertical dispersion quickly generates
a sufficient number of puffs to represent the complex dispersion process.
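
A minimal SETUP.CFG sketch for such a case (values illustrative, not a recommendation; INITD=0 is the 3D particle
default described above, and NUMPAR raises the particle number from its 2500 default):

&SETUP
 INITD = 0,
 NUMPAR = 10000,
/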

Continuous Emissions

As noted above the default release is 2500 particles over the duration of the emission cycle. If continuous emissions are
specified (e.g. over the duration of the simulation), then those 2500 particles are spread out over that time period. This
may easily result in the release of too few particles each hour to provide smooth temporal changes in the concentration
field. Imagine a single particle passing in and out of the vertical concentration cell due to turbulent diffusion. One
solution would be to increase the NUMPAR parameter until smoother results are obtained. Another possibility would be
to cycle the emissions by emitting particles only for the first time step of each hour. Those particles would contain the
total mass for a one-hour release (see how to set QCYCLE).
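
For the second solution, the relevant SETUP.CFG entries would be as sketched below (the NUMPAR value is
illustrative), with the one-time-step emission duration itself defined in the CONTROL file; QCYCLE=1.0 then repeats
that emission every hour:

&SETUP
 NUMPAR = 10000,
 QCYCLE = 1.0,
/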

Time Variation of the Emission Rate

One way to incorporate a time varying emission rate into the existing model structure is to use the particle dump feature
to restart the model each time with a new emission rate. Another option is to assign the name of a temporal emission
input file to the "EFILE" variable in the setup.cfg namelist file. This ASCII file must consist of at least three records, the
first two of which are used for identification purposes, and the third, and all subsequent records, define the temporal
sequence of emissions. Each emission record contains the start time, duration, location, and emission rate. If the EFILE
is present, the first emission record's values replace the emission values set in the control file. Once the model
computation time has passed the emission period defined on the first emission record, the emission data from the second
record are loaded and the calculation continues with the new emission data. The format of the emission file is given
below:

Record #3 -> end

I4 - Start year
I3 - Start month
I3 - Start day
I3 - Start hour
I2 - Start minute
I3 - Duration hours
I2 - Duration minutes
F6.2 - Latitude
F8.2 - Longitude
F8.0 - Emission rate in units/hour
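
For example, following the field widths above, a single data record for a hypothetical one-hour release of 100
units/hour at 40.00N, 90.00W starting at 0000 UTC 16 October 1995 would be written as (the two free-text
identification records are not shown):

  95 10 16  0 0  1 0 40.00  -90.00    100.
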
Time-Varying Unit Source Simulations

The previous approach defined a time-varying emissions file to use with a single simulation. An alternate approach is
to run the model with a unit source emission rate and, in a post-processing step, apply a time-varying emissions factor. This
can be accomplished by multiple simulations, one for each emission time period, and then the concentration output
results can be added together or through a single simulation where individual release time periods are tagged as distinct
pollutants. This is accomplished using the namelist parameters QCYCLE and ICHEM.

As an example, if hourly (the minimum) resolution is desired over a 24 hour simulation period, then the emission rate
needs to be set to emit one unit for a duration of one hour. The namelist variable QCYCLE should also be set to 1.0
hour so that the emissions effectively become continuous at a rate of one unit per hour. This in combination with
ICHEM=10 will result in the creation of 24 concentration arrays with each particle being tagged according to its release
hour and those particles will only contribute to concentrations on the grid with the same release-time tag. The pollutant
identification field is used and its value corresponds to hours after the start of the simulation. The output concentration
grid will appear to be like any other but with 24 pollutants, one for each release period as defined by the QCYCLE
parameter. Make sure that a sufficient number of particles have been released to provide consistent results for all time
periods.

The output can be processed like any other concentration file by selecting the specific pollutant (= release time) or by
two different post-processing applications that are configured to decode the output file. The apply source concentration
utility menu multiplies each concentration field with its associated emission rate defined in an external file. Another
option is to use the convert to station menu to extract the time series of concentrations contributed by each release time
to each sampling period for a specific pre-selected location. The TCM format checkbox should be selected to enable
the proper conversion option.
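
A sketch of the corresponding SETUP.CFG entries for such hourly tagged releases (one unit emitted each hour via
QCYCLE, with ICHEM=10 tagging each release hour onto its own concentration array, as described above):

&SETUP
 QCYCLE = 1.0,
 ICHEM = 10,
/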

Area Source Emissions

Normally emissions are assumed to be point or vertical line sources. Virtual point sources (initial source area >0) can be
defined two ways: 1) through the definition of an initial area on the source location input line of the CONTROL file or
2) by the definition of a gridded emissions file. If the model's root startup directory contains the file emission.txt, then
the pollutants are emitted from each grid cell according to the definitions previously set in the CONTROL file. Two source
points should be selected, which define the lower left (1st point) and upper right (2nd point) corner of the emissions grid
that will be used in the simulation. This should be a subset of the grid defined in emission.txt. The release height
represents the height from the ground through which pollutants will be initially distributed. Note that the structure of the
"emission.txt" file has changed with the HYSPLIT 4.6 revision of October 2003.

The "emission.txt" file contains all the information that is required to interpret the data in the gridded emission
inventory file. The file that contains the inventory is now independent of the emission.txt file. The file's first record
contains information about the internal grid cell size that is used by the dispersion model to accumulate the file's
emissions. The emission file defines the emissions at latitude-longitude points, which may represent the emissions from
an area or from a point. The values at these points are accumulated in an internal grid, the size of which is defined on
the first record. This value can be arbitrarily changed according to the desired resolution of the simulation. The pollutant
puffs are released with an initial size comparable to the accumulation cell size. Because the emission file data are
remapped to an internal grid, the file can consist of emissions data on a regular grid or just a collection of individual
cells. The emission rate in the Control file is used as an additional multiplication factor for the data in the emission file.
Also note that previously discussed particle number restrictions still apply. The particles are spread out over the duration
of the emission and the number of grid cells that are defined in the emission domain. The format of the emission.txt file
is given below:

Record #1

I4 - Number (n) of pollutant species in file


I4 - Number of emissions defined for each 24 hour period
F10.4 - Conversion factor: file units to model units/hour
2F10.4 - Accumulation cell size (latitude & longitude)

Record #2

nA4 - Four character pollutant identification string for each pollutant

Record #3

A - The /directory/filename of the emission data file

The actual emission data file will contain one record identifying the grid location and then two records for each
pollutant species. The first record defines emissions from GMT hours 0 to 12 and the second record from hours 12 to
24. This pair of records is repeated for each pollutant species:

Record loop #1 - repeated for each I,J grid point

2I4 - I,J grid point of emission cell (arbitrary units for identification)
2F10.4 - Southwest corner Longitude and Latitude of this emission cell

Record loop #2 - repeated for each pollutant species

12E10.3 - emissions for pollutant#1 hours 1-12


12E10.3 - emissions for pollutant#1 hours 13-24
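
Assembling these records, a minimal emission.txt and inventory data file pair for one pollutant and a single emission
cell might look like the sketch below; the pollutant name, file path, and values are illustrative, assuming 24 hourly
emission values defined per day:

emission.txt:

   1  24    1.0000    1.0000    1.0000
TEST
/working/emisdata.txt

/working/emisdata.txt:

   1   1  -90.0000   40.0000
 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01
 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01 0.100E+01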

Multiple Pollutants

The model can easily be configured to simulate more complex pollutant episodes with multiple pollutant types on
different particles or multiple pollutant species on the same particle. The former is accomplished by defining additional
pollutants in the CONTROL file. In this configuration, multiple species are emitted, have no interaction, and may track
differently. This situation may represent a volcanic ash plume, where each pollutant, a different sized particle, settles at
a different rate. An example configuration control_volcano can be retrieved in the SETUP menu.

Pollutant Transformations

In the latter situation, when multiple pollutants are defined on the same particle, an external chemistry routine is
required that converts mass from one species to another, all tracking together (advecting and dispersing). In this
situation, MAXDIM should be raised to the required value. Increasing the MAXDIM value always requires an external
routine to adjust the mass between species. A simple species conversion program is included with the standard model
distribution. In the default configuration it is only necessary to define two different pollutants in the concentration setup
menu and select the [fraction] /hr checkbox in the advanced configuration menu's conversion section. The default
fraction is 0.10 (10% per hour). This option automatically sets MAXDIM=2 in the model and calls the transformation
routine every time step to convert pollutant #1 to #2 at the defined rate per hour.

Other conversion rates or a greater number of pollutants can be defined by editing the CHEMRATE.TXT file in the local
directory. This file is created automatically each time the Save button is pressed when the ICHEM=2 option is selected.
The file is only required when defining conversions other than the default case. The file consists of one or more records, each
record defining a pollutant conversion. The data are free-format and consist of four fields, the integer "from" and "to"
pollutant index numbers, and the real hourly conversion "rate", and molecular weight adjustment "factor". For instance,
if the file were to be defined for the default case, the one data record would have the following values: (1 2 0.10 1.0).
The molecular weight adjustment factor can be used to account for other reactions not considered in the simple
conversion module. For instance, if one were to define pollutants #1 and #2 as SO2 and SO4, respectively, then the
molecular weight adjustment factor should be 1.5 as SO2 transforms to SO4 (the conversion picks up two additional
oxygen atoms).
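
For that SO2-to-SO4 example, CHEMRATE.TXT would contain the single free-format record:

1 2 0.10 1.5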

Complex Chemistry
Although there are other more complex chemical conversion modules available for HYSPLIT, they are not incorporated
into the standard compilation. More information on these special compilations may be found at
http://www.ready.noaa.gov/HYSPLIT_pcchem.php

One feature, required for all these modules, is that there is a more complex interaction between the individual pollutant
plumes, requiring a close link between the concentration grid and the meteorological data grid. This option is available
in the standard model compilation. By setting the namelist file parameter ICHEM=4, the concentration grid is redefined
to be equal to the meteorological data grid in terms of spatial resolution and extent. This simplifies the computation of
the grid based chemical reactions that are dependent upon the meteorological conditions within each concentration grid
cell.

Deposition and Decay

A simple particle deposition configuration (control_nuclear) for radioactive Cs-137 can be retrieved into the SETUP
menu, which shows the default settings for radioactive decay and wet and dry deposition. In conjunction, a list of sites
can be loaded into any SETUP menu, from the "Set Starting Locations" tab by pressing the LIST button. The site
locations can be found in the file "\working\plants.txt" and could be replaced by any user generated location file listing.

The normal deposition mode is for particles to lose mass to deposition when those particles are within the deposition
layer. An additional option was added to deposit the entire particle's mass at the surface (the particle is removed) when
subjected to deposition. To ensure the same mass removal rates between the two methods, a probability of deposition is
computed, so that only a fraction of the particles within the deposition layer are deposited in any one time step. The
probability of deposition is a function of the deposition velocity, time step, and depth of the layer. One limitation of this
method is that only one mass species may be assigned to a particle. The probability deposition method can be invoked
from the namelist file with ICHEM=5.

Compilation Limits

With HYSPLIT V4.5 most compilation array limits have been eliminated through the use of dynamic array allocation.
However, one restriction remains with regard to the meteorological input data: if only a single value is specified for
the number of input data grids, the number of meteorological data files is limited to a maximum of 12 per simulation.

While not available in the GUI, if the user is creating the CONTROL file themselves, then two numbers can be
specified in the CONTROL file on the line for the "number of input data grids": the 1st being the number of unique
grids and the 2nd being the number of files in each grid. For example, an entry of 2 12 would mean that there are met
files for 2 different grids (e.g., a regional and a global grid), and that there are 12 files being specified for each grid. The
grids should be specified in order of resolution, with the highest resolution grids (i.e., the smallest horizontal spacing
between grid points) being specified before lower resolution grids. The two entries for each file (directory and filename)
are repeated for each file in the first grid, and then for each file in the second grid, and so on, for any subsequent grids.
Note that the same number of files is required for each grid in this approach. Without the use of this approach (i.e.,
when only one number is specified) the maximum number of files that can be used in the simulation is relatively small,
but with this second approach, a much larger number of files can be used in the simulation. Current compilation limits
allow 12 unique grids and up to 128 files for each grid.
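
As a sketch, the meteorology section of such a CONTROL file for 2 grids with 2 files each (directory and file names
hypothetical) would read:

2 2
/metdata/regional/
regional_file1.bin
/metdata/regional/
regional_file2.bin
/metdata/global/
global_file1.bin
/metdata/global/
global_file2.bin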

Finally, with current compilation limits, there can be no more than 75 levels or 35 variables in each file. These last
restrictions do not limit any computation with data files available through the ARL web site, because all available data
files meet the number of variable and levels restriction.

The use of dynamic memory allocation can result in unpredictable results if the computer's hardware memory limits are
exceeded. Although there are several memory allocation error traps that will result in a message and execution
termination, memory limits can be exceeded in a variety of different locations, such as when opening a file. Memory
usage is primarily a function of the meteorological sub-grid size, meteorological data grid size, concentration grid size,
and the number of pollutants.

Script Automation and Configuration


Most of the discussion in the various sections of the User's Guide is tailored to individually configured simulations.
However there are several features to the model that can be used to automate the computational environment. For
instance, a sample Auto_traj.tcl script is provided in the /examples/scripts/tcl directory that can be used as a guide to
automate many applications.

# Auto_traj.tcl
# the next line restarts using wish
# exec wish "$0" "$@"
set Start_hgt "10.0"
set Traj_path "../exec"
set Start_time "00 00 00 00"
set Run_hours "24"
set Vert_coord "0"
set Top_model "10000.0"
set Meteo_path "../metdata/"
set Meteo_file "oct1618.BIN"
set Output_path "./"
set Output_base "tdump"
set Output_numb 1
foreach {Start_lat Start_lon} {35.0 -90.0 40.0 -90.0 45.0 -90.0} {
set Start_loc "$Start_lat $Start_lon $Start_hgt"
set Output_file "$Output_base$Output_numb"
file delete Control
set f [open Control w]
puts $f "$Start_time"
puts $f "1"
puts $f "$Start_loc"
puts $f "$Run_hours"
puts $f "$Vert_coord"
puts $f "$Top_model"
puts $f "1"
puts $f "$Meteo_path"
puts $f "$Meteo_file"
puts $f "$Output_path"
puts $f "$Output_file"
close $f
exec "$Traj_path/hyts_std.exe"
incr Output_numb
}

In this particular example the test trajectory case is run for three different starting locations, each simulation writing a
new endpoints file with a unique file name. The CONTROL file is recreated for each simulation. It would be trivial to
rewrite the script to set the latitude-longitude and loop through a different number of starting days and hours. With
Tcl/Tk installed, this script can be run under Windows or Unix. For instance, to compute new forecast trajectories each
day, the process can be automated by including a data FTP at the beginning of the script to get the most recent
meteorological forecast file, setting the starting time as "00 00 00 00" so that the trajectories will start at the beginning
of the file, and finally calling the script once a day through the Unix crontab or the Windows scheduler commands.

One problem with automated operations is that it is possible to generate multiple simultaneous jobs which may interfere
with each other. The executables, hycs_std and hyts_std have a command line option of adding the process ID (PID):
e.g. hyts_std [PID]. In this situation all standard named input and output files [those not defined in the Control file] have
the PID added as a suffix to the file name: e.g. Control.[PID], Setup.[PID], Message.[PID].

An example of another type of operational configuration is the extended simulation of a pollutant emission using
archive data to bring the simulation to the current time and then using forecast meteorological data to provide a daily
projection. Each day the archive simulation must be updated with the new archive data and a new forecast product
generated. This process can also be automated through a script, but for illustration purposes one can use the advanced
features of the GUI to configure such a case. Assume a one-hour duration accidental pollutant release that occurred 48
hours prior to the current time. The following sequence applies:

1) From the "Meteorology" menu tab download the appropriate archive meteorological data and the most recent
forecast meteorological data (assume it is available to +48h).

2) Setup the concentration simulation to run 96 hours using two meteorological files starting with the archive data
and then switching to the forecast data.

3) Under the Advanced menu tab and Configuration Setup write the initialization file after 72 hours.

4) Run the model.

At the completion of the simulation you will have the plume projection from release (-48 h) through the current forecast
(+48 h). The PARDUMP file will contain all the endpoint positions at +24 hours, corresponding to the initialization time
of when the next forecast will be available (assume there is one forecast per day).

The next day, when the new forecast data are available, reconfigure the model to run only with the forecast
meteorological data for a duration of 48 hours. Then write the initialization file after 24 hours and run the model to
obtain the new projection. In this second part, we assume that the first 24 hours of the forecast are not much different
than the analysis. In practice, this procedure can be run at the same frequency that the new forecast data are available,
typically 4 times per day. Data at the initial forecast hour are identical to the analysis data.

Source Attribution with Dispersion

A common application of atmospheric trajectory and dispersion models is to try to determine the source of a pollution
measurement. If a high value has been collected at a particular receptor, from which pollutant source region did the air
originate? One approach is to calculate the trajectory "backwards" from the receptor site. In the trajectory calculation
this is accomplished by setting the integration time step to a negative value. However, the trajectory only represents the
upwind path of a single point, while the pollutant measurement may require hundreds or thousands of trajectories to
represent the dispersion of the pollutant in time and space.

Another approach is to run the entire dispersion-trajectory model upstream (backwards), which is computationally
attractive because in a 3D particle model the dispersion process is represented by a turbulent component added to the
trajectory calculation and the advection process (the trajectory) is fully reversible. The trajectory equation can be
correctly integrated in either direction. The interpretation of the output is a bit more complex because dispersion is an
irreversible process. The upstream numerical calculation will yield a result because the integration of the dispersion
equation is still in the downstream mode while the advection is integrated backward upstream. The meaning of the
upwind dispersion result is not as easily interpreted as the downwind calculation. In any event, as noted in the earlier
instructions, it is possible to run the dispersion model "backwards" by setting the run duration to its equivalent negative
value. The stop time of the sampling should be set prior to the start time. All start and stop times should be set to their
exact values - relative start-stop times are not supported in the backward mode.

Transport of Particles Deposited on Water Surfaces

The main code was modified (October 2003 Version 4.6) to permit particles deposited on water surfaces to continue to
be transported on the water surface by the wind generated drift current. The transport output is treated as a deposition
surface for display purposes. This new deposition method then creates particles that can be transported on water
surfaces. Particles can be deposited on any surface. However, if the surface is defined as water, then the particle is
assigned a unique identification code to distinguish it from atmospheric particles or puffs. These new particles may
continue to be transported along the surface of the water contributing to deposition each time step but not air
concentration. Dispersion is not computed for these particles. When they approach a land surface they are deleted. The
water surface transport option is invoked from the namelist file with ICHEM=7. This option automatically forces the
probability deposition computation (ICHEM=5) and should be used only with the 3D particle mode (INITD=0).
Surface water deposition can only be displayed if the deposition output level (0) is defined. Although particles may
deposit over land, over-land deposition values are never shown.

The wind induced surface water drift current is assumed to equal the vector atmospheric friction velocity. The friction
velocity represents the momentum transport to the surface and it is an approximation of the surface water movement.
Currently only the GFS meteorological model output file contains the vector momentum flux components.

Mixing Ratio Output

Setting this option (ICHEM=6) forces the concentration summation calls within the main program to integrate the mass divided by
the air density (kg/m3) rather than just the mass, thereby permitting the output fields to be more easily converted from
mass/volume to mixing ratio. This option would most likely be used in conjunction with the CMASS namelist variable
that can be set to sum mass/volume or just mass.

STILT mode

The STILT model incorporates the variation of HYSPLIT developed by Lin et al. (2003 - JGR, VOL. 108, NO. D16,
4493, doi:10.1029/2002JD003161) that can be used to estimate upwind surface fluxes from atmospheric measurements.
Two changes are introduced; the mass summation is divided by air density resulting in a mixing ratio output field
(ICHEM=6) and the lowest concentration summation layer (concentration layer top depth) is permitted to vary with the
mixed layer depth (ICHEM=9). The ICHEM=8 switch turns on both density and varying layer depth. Two text files of
particle position information (PARTICLE.DAT and PARTICLE_STILT.DAT) at each time step will also be created
unless the namelist parameter OUTDT defining the output interval (min) is changed. PARTICLE_STILT.DAT follows
the same format as STILT. The footprint output in PARTICLE.DAT represents particles that were below 50% of the
mixed layer height. The footprint in PARTICLE_STILT.DAT represents particles that were below a user-defined height
(VEGHT).

Concentration layer varies with mixed layer depth

This option changes the height of the lowest concentration layer from a fixed value to a variable, which changes
according to the mixed layer depth each time step. The depth, as a fraction of the mixed layer, is computed by the height
value of the lowest level, where the height is interpreted in hundredths, such that a fraction representing 0.5 of the
mixed layer depth would be entered as 50. Note that in this mode it may be desired to change the default minimum
mixing depth KMIX0 from its default value of 250 m, which would result in a minimum sampling layer of 125 m.
Concentration levels above the first level are unaffected by this option.

Table of Contents
Advanced / Configuration Setup
This section provides some guidance in configuring the model input to perform certain specialized calculations. The
default configuration supplied with the test meteorological data is confined to a simple trajectory and inert transport and
dispersion calculation. More complex scenarios can be configured through the Advanced / Configuration Setup menu
tabs. There are seven sub menus available.

1. The Trajectory tab opens a menu for creation or modification of the SETUP.CFG namelist file. This can be
accomplished through the GUI or by directly editing the namelist file. The file is required to be in the model
startup directory, usually ./working when running the model through the GUI. More information about the
trajectory namelist variables can be found in the trajectories help file.

2. The Concentration tab opens a similar menu to the trajectory tab, but with many more options. The trajectory
and concentration menus are confined to the variables relevant to the respective model calculation. More
information about the concentration namelist variables can be found in the concentration help file.

3. The Global tab opens a menu to create or edit a special namelist file that configures the Global Eulerian Model
(GEM), a series of subroutines within the main HYSPLIT transport and dispersion code. Particles or puffs are
always first released in the Lagrangian framework and carried within HYSPLIT until they exceed a certain age at
which point their mass is transferred to the GEM routines. Particles can be released and transferred for the entire
duration of the simulation.

4. The Dynamic Sampling menu is used to configure a special virtual sampler that can move in space and time
through the computational domain during the concentration simulation. Dynamic samplers can move passively
with the wind or have a defined pathway, similar to an airplane.

5. The Emissions File tab opens a menu to edit or create a point-source emission file that can define more complex
release scenarios than could be created using just the CONTROL file. Anything from simple time-varying
emissions to more complex moving line or point sources can be configured.

6. The Set Directories menu is used to configure the GUI's directory structure as well as the location and name of
certain executable programs used by the GUI. The structure is written to the default_exec file. If that file is not
found in the startup directory, the GUI attempts to set all the values. Significant differences are found between
Windows, Apple, and UNIX installations.

7. The Extra Labeling menu opens a simple editor menu to create a file of extra label information that is added to
the bottom of all trajectory or concentration plots. Only these two display programs support the extra label
function.

Table of Contents
Advanced / Configuration Setup / Trajectory
This section provides some guidance in configuring the model input to perform certain specialized calculations. The
default configuration supplied with the test meteorological data is confined to a simple trajectory calculation. More
complex scenarios can be configured through the Advanced, Configuration Setup, Trajectory menu tab. The menu is
used to modify the SETUP.CFG namelist file. This file is not required, and if not present in the root startup directory,
default values are used. These parameters can all be changed without recompilation by modification of the contents of
SETUP.CFG and in some cases their modification will substantially change the nature of the simulation. The
configuration file should be present in the root directory. An illustration of the menu is shown below.

When using the GUI, the namelist file will be deleted and all variables are returned to their default value by selecting
the "Reset" button. The following summarizes the namelist variables and their corresponding section in the GUI menu.

1. Set fixed or automatic TIME STEPS


2. Define subgrid and MSL/AGL UNITS
3. MULTIPLE trajectories in time
4. Trajectory points OUTPUT FREQUENCY
5. MIXING DEPTH computation method
6. Add METEOROLOGY output along trajectory
7. Meteorological grid offset ENSEMBLE
8. WRF vertical interpolation
9. Summary of all NAMELIST variable defaults

Table of Contents
Advanced / Configuration Setup / Concentration
This section provides some guidance in configuring the model input to perform certain specialized calculations. The
default configuration supplied with the test meteorological data is confined to a simple dispersion calculation. More
complex scenarios can be configured through the Advanced / Configuration / Concentration Setup menu tab. The menu
is used to modify the SETUP.CFG namelist file. This file is not required, and if not present in the root startup directory,
default values are used. These parameters can all be changed without recompilation by modification of the contents of
SETUP.CFG and in some cases their modification will substantially change the nature of the simulation. The
configuration file should be present in the root directory. An illustration of the upper-level menu is shown below.

When using the GUI, the namelist file will be deleted and all variables are returned to their default value by selecting
the "Reset" button. The following summarizes the namelist variables and their corresponding section in the GUI menu.
Not all variables can be set through the menu system but may be modified by directly editing the namelist file.

1. Set fixed or automatic TIME STEPS


2. Define subgrid and MSL/AGL UNITS
3. Configure release of PARTICLES or PUFFS
4. Set the particle/puff RELEASE NUMBER limits
5. Set the puff SPLIT-MERGE parameters
6. Define EMISSION CYCLING or input file
7. Configure the TURBULENCE method
8. Concentration GRID PACKING method
9. Input and output PARTICLE FILES
10. In-line chemical CONVERSION MODULES
11. Meteorological grid offset ENSEMBLE
12. Output CENTER-OF-MASS trajectory
13. WRF vertical interpolation
14. Summary of all NAMELIST variable defaults

Table of Contents
Advanced / File Edit / Dynamic Sampling
This menu creates the optional LAGSET.CFG file which is used to configure a dynamic sampler. A dynamic sampler is
defined as a moving sampler that can pass through the model simulation domain, either passively with the wind
(Lagrangian mode) or by using a pre-defined velocity vector (Forced mode). In the current version the dynamic sampler
transport is always vertically isobaric regardless of the vertical motion method selected for the pollutant transport. The
dynamic sampler samples model produced values that are generated internally on a snapshot concentration grid.

The first menu tab defaults to the configuration for one sampler. If multiple samplers are required, then enter the number
in this menu. After pressing the Configure Samplers button, another menu comes up to select the sampler number to
configure. Pressing the sampler number button brings up the menu shown below. Each defined sampler must be
configured according to the following instructions.

The dynamic sampler configuration file must be located in the model's startup directory to be found and the name
should always be in uppercase. The following is an example of the contents of LAGSET.CFG for one dynamic sampler.
Multiple samplers would repeat the last seven line sequence.

1 : number of dynamic samplers

40.0 -90.0 500.0 : release location and height (agl)

0.0 0.0 : force vector - direction, speed (m/s)

95 10 16 00 00 : release start - year month day hour minute

95 10 16 00 00 : sampling start - year month day hour minute

00 : sample averaging (min)

60 : disk file output interval (min)

'LAGOUT.TXT' : sampler output file


Number of dynamic samplers: Due to file unit number restrictions, the current version only supports 9 simultaneous
sampler definitions. The following seven lines need to be repeated for every defined sampler.

Release location and height: The location is the latitude-longitude point at which the trajectory of the sampler path is
started. Although the subsequent vertical motion is isobaric, the initial starting height must be defined in meters above
ground-level. Note that the isobaric sampler trajectory will not be identical to the HYSPLIT trajectory model isobaric
trajectory because of differences in how the vertical motion is computed between the two models.

Force vector: If the direction (downwind) and speed (m/s) are set to zero, then the model computes the sampler trajectory
according to the meteorological input data wind velocity. If any of these values are not zero, then the sampler trajectory
is computed using these values for its entire length. To simulate a more complex aircraft sampling path, each leg of the
flight pattern requires its own sampler definition. For instance, an aircraft flying east will have a direction vector of
090.

Release start: The release time of the sampler gives the date and time that the sampler starts from the previously defined
release location. Note that in the current version, zeros for this field (relative start time) are not supported.

Sampling start: The sampler may proceed on its trajectory for quite some time before sampling is started. The sampler
start time must be greater than or equal to the release start time.

Sample averaging (minutes): At every computational time step the model determines the sampler position in the
concentration grid and accumulates the concentration value in memory. When the sampler accumulation time reaches
the sample averaging time, all sums are reset to zero. A sample averaging time of zero means that the sample is only
accumulated for one model integration time step.

Disk file output interval (minutes): At the disk output interval, the sampler concentration values are written to the output
file defined on the next input line. The value written is the accumulated concentration divided by the accumulated
minutes. The disk output interval is independent from the sample averaging time.

Sampler output file: The directory and file name should be enclosed in single quotes. If the file is defined without a
directory it is written to the model startup directory.

Concentration Grid Configuration

Note that dynamic sampling will only work if the sampler trajectory passes through a concentration grid covering the
region of interest. That means that a concentration grid of sufficient resolution, in both the horizontal and vertical, is
required for a sampler to capture the pollutant. However, too much resolution (too fine a grid) may mean that there could
be an insufficient number of pollutant particles to provide a uniform distribution, and therefore the sampling could
provide unrepresentative results. The concentration grid is required to be defined as a snapshot grid. In the current
version, only one pollutant species per simulation is supported.

Table of Contents
Advanced / File Edit / Emissions File
This menu creates the optional EMITIMES file which is used to configure more complex point source emissions
scenarios. In the standard model simulation, the CONTROL file can only be used to define one pollutant release cycle
which applies equally to all source locations. Although multiple release cycles can be defined, they must all be at the
same interval. Using the EMITIMES file to define the point source emissions, multiple release locations can each have
their own emission characteristics, each with different pollutants, if desired. Furthermore, multiple emission cycles, at
non-regular intervals can also be defined. By appropriately locating multiple sources in space and time, line-source as
well as other non-regular emissions configurations can be created. In version 4.8 the format of this file has changed
from previous HYSPLIT versions to such an extent that they are incompatible with each other. In all versions the file
name is defined by the EFILE variable in the namelist configuration file created through the Advanced / Configuration
menu tab.

The first menu tab defaults to the configuration for one source. If multiple sources are required, then enter the number in
this menu. After pressing the Configure Locations button, another menu comes up to select the location number to
configure. Pressing the location number button brings up the menu shown below. The GUI menu only supports the
creation of a file for one pollutant for one emission cycle. If multiple pollutants are defined, or multiple cycles are
required, then the file must be edited manually by duplicating the emission record at each location for all pollutants in
the order they are defined in the CONTROL file. Each defined release location must be configured according to the
following instructions. The point source emissions file must be located in the startup directory and the name should
always be in uppercase. An example of the contents of an EMITIMES file for one location is shown after the record
formats below; multiple locations would have one data record per location. The number of data records should equal
the number of sources defined in the CONTROL file times the number of pollutants released.

The EMITIMES file may also be configured to construct a vertical line source by having two consecutive emission
points defined at the same spatial location but each with different heights. Unlike the vertical line source definition
through the CONTROL file, where the same emission rate is defined for all sources, here the emission rate may be
varied through the column. For instance, if the source record N is at the same location as record N-1, then the emission
rate defined for record N-1 will be used for all particles released in the column from N-1 to N. This means that the
emission value given for the last record in a series of records at the same location will not be used.

Record 1 - Identification record describing the emission cycle header record


Record 2 - Identification record describing a location emission data record
Record 3 - First emission cycle header record
Record 4 - First emission data record in the first emission cycle

Emission Cycle Header Record: {YYYY} {MM} {DD} {HH} {hhhh} {#rec}

{YYYY} {MM} {DD} {HH} - Starting time of the emission cycle, with each new emission cycle, all previous
emission records are replaced with those in the new cycle.
{hhhh} - Duration in hours that this emission cycle is valid.
{#rec} - Number of emission records in this cycle (= # sources times # pollutants).

Emission Cycle Data Record: {YYYY} {MM} {DD} {HH} {mm} {HHmm} {Lat} {Lon} {Hgt} {Rate} {Area}
{Heat}

{YYYY} {MM} {DD} {HH} {mm} - Emission start time at this location
{HHmm} - Release duration in hours and minutes (no space).
{Lat} {Lon} {Hgt} - Position and height of the release.
{Rate} - Emission rate in mass units per hour.
{Area} - Emission area in square meters. Zero is treated as a point source.
{Heat} - Heat released during the emission or fire radiative power for namelist option PLRISE = 1 or 2,
respectively. Non-zero values result in a buoyancy plume rise calculation, replacing the previous height value.

The emission cycle header record defines the valid time period for the subsequent data records. At the end of the header
record time period {hhhh}, the model will attempt to read the next emission cycle header record. If no header record is
found, then the EMITIMES emission processing is terminated and the model reverts to using the emission values
defined in the CONTROL file. For instance, this means that if a short emission duration is defined in the data records,
but the model simulation covers a longer period, the header record duration {hhhh} should be long enough to cover the
entire simulation period. Unless it is intended to use both methods, the emission rate and duration in the CONTROL file
should both be set to zero when using an EMITIMES file.
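
Putting the records together, a minimal EMITIMES file for a single location emitting 100 units/hour for one hour might
look like the sketch below (the two identification records are free text, and all values are illustrative):

YYYY MM DD HH    DURATION(hhhh) #RECORDS
YYYY MM DD HH MM DURATION(hhmm) LAT LON HGT(m) RATE(/h) AREA(m2) HEAT(w)
2016 10 16 00 9999 1
2016 10 16 00 00 0100 40.0 -90.0 10.0 100.0 0.0 0.0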

Emission Area

For computational particles (INITD=0,103,104) the emission area is a square by default. For version v5.1.0 and later,
the AREA namelist option may be used to specify a circular emission area for computational particles, AREA=1.
Particles are distributed randomly with a uniform distribution in the emission area. For versions earlier than v5.1.0 the
square area was smaller than the specified area by a factor of 1/π.
For Gaussian puffs, the specified area is the area out to 1.54σ. Hence, the whole puff will cover a larger area. The
standard deviation for the Gaussian puff is calculated from σ = (A/π)^(1/2) / 1.54, where A is the specified area in
square meters.

Special Case: Backward simulations

In the case of backward simulations, the cycle header record should point to the first (oldest) emission cycle and all
subsequent emission cycles should be earlier in time. The emission start time within a cycle corresponds to a time such
that the duration of emissions proceeds backward from that starting time. The internal backward flag has already been
set from the negative run duration in the CONTROL file and therefore all time durations in the EMITIMES file should
be expressed as positive numbers.

Table of Contents
Advanced / Configuration Setup / Set Directories
The installation program installs all code and executables to your selected directory, and creates a shortcut on the
Windows desktop to /guicode/hysplit.tcl with the Start In directory as your selected default. On Unix systems the link
would appear in the /working directory. When the GUI is first started, it looks for the supplemental programs such as
web browser, ImageMagick, and Tcl/tk in certain standard directories, and writes those paths to a file called
default_exec. If this file already exists, those locations are used instead of the default search paths. The directories
specified in this file may be edited through the Set Directories menu tab. Copying the default_exec file to another
computer with a different directory structure may cause the GUI to fail to open. In these situations, either delete the file,
or manually edit the file to reflect the correct directory structure.

The default_exec values either reflect a directory location, a file name, or a directory/executable name. The content is
mostly self-explanatory. Under UNIX systems, the X-windows directory should point to the location of the xterm
program. This is optional. If the directory is defined, then many of the programs called by the GUI will run in their own
xterm window.

The Anaconda3 environment path is necessary for Python scripts. With Anaconda, users can create an environment and
manage python packages in it. Each environment is self-contained and users can test out a new version of package
without affecting other environments. For HYSPLIT use, it is recommended to create a hysplit environment and have the
Anaconda3 environment path on the graphical user interface point to the environment. For Windows 10 users, the
environment path would be C:/Users/YOUR_USER_NAME/AppData/Local/Continuum/anaconda3/envs/hysplit when
default values were used for Anaconda installation.

Table of Contents
Advanced / File Edit / Panel Labels
If a file called MAPTEXT.CFG is found in the root or working directory during the execution of either the trajectory or
concentration plotting program, additional label information is written at the bottom of each graphic. The Extra Label
menu tab can be used to edit this information. An illustration of the menu is shown below, followed by the resulting
trajectory graphic for the example simulation. Note that the menu entries are entirely text based and there are no
restrictions regarding content. However, not all plotting programs display all lines. The example header text indicates
the appropriate application.
Table of Contents
Advanced / File Edit / Border Labels
Many of the HYSPLIT plotting programs have label information that can be customized to some extent. This is
accomplished by placing a file called LABELS.CFG in the working directory which should contain one or more of the
following valid entries (all in single quotes terminated by &). Replace the second text string with the new desired text.
Not all label strings are valid with every plotting program. This file can be created and edited through the menu system.
Quotes and string terminators are not required in the edit menu.

'TITLE&','NOAA AIR RESOURCES LABORATORY&'


'MAPID&','Air Concentration&'
'LAYER&','Layer Averaged&'
'UNITS&','Bq&'
'VOLUM&','/m3&'
'TZONE&','UTC+00&'
'RELEASE&','Release info&'
'ALLTD&','m&'

The time zone variable can be used to force a specific time zone for the air concentration contour plot. All calculations
are performed in UTC, but prior to plotting, all times are converted by the time zone adjustment. The field should
always be six characters in length. The first three represent the time zone label, such as UTC, the next character is the
sign field (+ to the east or - to the west), and the last two digits are the hours difference of the time zone from UTC.

Altitude units (ALTTD) may be set to feet for concentration output plots.

Table of Contents
Advanced / File Edit / FMDV Decay
Foot and Mouth Disease

[email protected], [email protected], April 2015

Foot and Mouth Disease is a contagious viral disease of cattle and sheep, causing ulceration of the hooves and around the
mouth. Evidence has shown[1] that the Foot and Mouth Disease Virus (FMDV) can be transported via airborne means
making even a single outbreak highly infectious.

HYSPLIT is capable of modelling the dispersion of airborne FMDV. The survival of the FMDV particle is dependent on (a)
the age of the particle (the virus has a natural lifespan, which is simulated within the model with a half-life factor), (b) the
air temperature, retrieved from the meteorological file, and (c) the air humidity, also retrieved from the meteorological file.
In reality, wet and dry deposition also influence the distance travelled by the airborne FMDV.

To activate the Foot and Mouth Disease algorithm in HYSPLIT the species identification label must be set to either FM_0 or
FM_1 or .... FM_7 where the integer behaves as a binary switch for including or excluding particle aging, temperature or
humidity dependence as desired.

The Table below shows the dependences of Foot and Mouth virus survival to the species identification string. FM_7 is the
most realistic option.

Identification  Particle Aging  Temperature  Humidity

FM_0            False           False        False
FM_1            False           False        True
FM_2            False           True         False
FM_3            False           True         True
FM_4            True            False        False
FM_5            True            False        True
FM_6            True            True         False
FM_7            True            True         True

The species identification string can be set in the pollutant emission rate setup menu or automatically using the FMDV
radiobutton in the deposition setup menu in the following section.
Default FMDV settings

- Deposition

The species identification string can also be set to a default value by selecting the FMDV radiobutton in the deposition
menu. This button will also set default values for wet and dry deposition. For the Foot and Mouth virus a dry deposition
velocity of 0.01m/s is assumed[1], but it can be changed by the user if required. FMDV wet deposition defaults are simply
those for small particles using the removal time constant of 8x10^-5 for both in-cloud and below-cloud removal, which is
similar to the value (5x10^-5) used by Garner[1].

- Particle age

Various publications use a virus decay constant to simulate particle ageing. For the Foot and Mouth virus estimates of an
effective half-life range from 30 mins[1] to 2 hours[2]. This constant is also dependent on the strain of the virus. To be
conservative the virus decay constant has been defined to be 2 hours by default but this can be easily redefined by the user in
the advanced settings menu (see below). Particle ageing can be turned off altogether if desired by using the species
identification label outlined above.

- Temperature

The survival of the virus depends on temperature. Taking the default temperature threshold of 24°C, all virus particles at
24°C and under are unaffected. The concentration mass of virus particles decreases linearly between 24°C and 30°C so that
none remains by the time they reach 30°C. This threshold of 24°C can be user defined in advanced settings (preserving the
6°C linear fall-off) or the temperature dependence can be turned off altogether if required by selecting the appropriate
species identification string outlined above.
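
The linear fall-off just described can be written as a survival fraction; a minimal Python sketch under those default
thresholds (the function name is hypothetical and this is not code from the HYSPLIT distribution):

T_CRIT = 24.0  # default critical temperature (deg C), user-definable
FALLOFF = 6.0  # fixed width of the linear fall-off (deg C)

def fmdv_temperature_survival(temp_c):
    # Fraction of FMDV mass surviving at air temperature temp_c:
    # 1.0 at or below 24 C, falling linearly to 0.0 at 30 C
    if temp_c <= T_CRIT:
        return 1.0
    return max(0.0, (T_CRIT + FALLOFF - temp_c) / FALLOFF)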

- Humidity

The survival of the virus also depends on humidity. The virus survives better in higher humidity. Virus concentrations with
relative humidity higher than 60% are left unaffected. Concentrations decrease exponentially as the humidity falls from 60%
to 1%. This threshold of 60% can be user defined in the advanced settings or the humidity dependence can be turned off
altogether if required by selecting the appropriate species identification string outlined above.

[Figures: FMDV survival dependence upon temperature and upon humidity]

How to change default FMDV dependency thresholds

There are many different variants of FMDV each with their own characteristics. If a different set of threshold dependencies
are known, the default FMDV settings can be changed. To do this, open the Advanced / File Edit / FMDV Decay menu to edit the
decay constant and the critical temperature and humidity defaults. This information is written into the fm_param.txt file which is read
when the HYSPLIT calculation starts.

References

[1] M.G. Garner, G.D. Hess and X. Yang, An integrated modelling approach to assess the risk of wind-borne spread of foot-
and-mouth disease virus from infected premises, Environmental Modelling & Assessment (2006) 11: 195-207.

[2] J.H. Sorensen, C.O. Jensen, T. Mikkelsen, D.K.J. Mackay and A.I. Donaldson, Modelling the Atmospheric Dispersion of
Foot and Mouth Virus for Emergency Preparedness, Phys. Chem. Earth (B), Vol. 26, No. 2, pp. 93-97, 2001.

Table of Contents
Advanced / File Edit / User Defined Internal Vertical Levels
By default, HYSPLIT analyzes the meteorological inputs to determine the appropriate internal vertical model levels so
that there are sufficient levels to interpolate all the meteorological input levels to an internal level without skipping input
data due to insufficient vertical resolution or insufficient number of data levels. Alternatively, users can define these
vertical levels by providing an input file called ZSG_LEVS.IN in the working directory.

ZSG_LEVS.IN should be formatted as follows:

{number of internal model levels} {index of top of surface layer} {model top [m AGL]}
ENDHEADER
{top height m AGL of level 1}
{top height m AGL of level 2}
{...}
{top height m AGL of top level}

An example ZSG_LEVS.IN with 19 vertical levels, the surface layer extending from the surface to the top of the second
internal model level at 75m AGL, and a model top of 10360.0 m AGL is formatted as follows:

19 2 10360.0
ENDHEADER
10.0000
75.0000
200.000
385.000
630.000
935.000
1300.00
1725.00
2210.00
2755.00
3360.00
4025.00
4750.00
5535.00
6380.00
7285.00
8250.00
9275.00
10360.0

Table of Contents
Advanced / Configuration Setup / Namelist Variables
This section provides a summary of the SETUP.CFG namelist file parameters required to configure the model to perform certain specialized calculations. The default value is shown in bold with
each variable, together with a link to the User's Guide page containing a more detailed discussion of the options. The namelist options that only affect trajectories are shown in italics. A minimal sketch of the SETUP.CFG file format follows the list below.

CAPEMIN=-1 -1 no convection; -2 Grell convection scheme; -3 extreme convection; >0 enhanced vertical mixing when CAPE exceeds this value (J/kg)
AREA=0 set emission area shape square (0) or circle (1)
CMASS=0 compute grid concentrations (0) or grid mass (1)
CMTFN=' ' center-of-mass trajectory output file name
CONAGE=24 particle to or from puff conversions at conage (hours)
CPACK=1 binary concentration packing 0:none 1:nonzero 2:points 3:polar
DELT=0.0 integration time step (0=autoset GT-0=constant LT-0=minimum)
DXF=1.0 horizontal X-grid adjustment factor for ensemble
DYF=1.0 horizontal Y-grid adjustment factor for ensemble
DZF=0.01 vertical (0.01 ~ 250m) factor for ensemble
EFILE=' ' temporal emission file name
FRHMAX=3.0 maximum value for the horizontal rounding parameter
FRHS=1.00 standard horizontal puff rounding fraction for merge
FRME=0.10 mass rounding fraction for enhanced merging
FRMR=0.0 mass removal fraction during enhanced merging
FRTS=0.10 temporal puff rounding fraction
FRVS=0.01 vertical puff rounding fraction
GEMAGE=48 particle or puff transfer to the global model after gemage (hours)
HSCALE=10800.0 horizontal Lagrangian time scale (sec)
ICHEM=0 chemistry conversion modules 0:none 1:matrix 2:convert 3:dust ...
IDSP=1 particle dispersion scheme 1:HYSPLIT 2:STILT
INITD=0 initial distribution, particle, puff, or combination
IVMAX=20 number of variables written to PARTICLE_STILT.DAT. Must equal the number of variables listed for variable VARSIWANT
K10M=1 use surface 10m winds / 2m temperature (1) or skip (0)
KAGL=1 trajectory output heights are written as AGL (1) or as MSL (0)
KBLS=1 boundary layer stability derived from 1:fluxes 2:wind_temperature
KBLT=2 boundary layer turbulence parameterizations 1:Beljaars 2:Kanthar 3:TKE 4:Measured 5:Hanna
KDEF=0 horizontal turbulence 0=vertical 1=deformation
KHINP=0 when non-zero sets the age (h) for particles read from PINPF
KHMAX=9999 maximum duration (h) for a particle or trajectory
KMIXD=0 mixed layer obtained from 0:input 1:temperature 2:TKE 3:modified Richardson #
KMIX0=150 minimum mixing depth
KMSL=0 starting heights default to AGL=0 or MSL=1
KPUFF=0 horizontal puff dispersion linear (0) or empirical (1) growth
KRAND=0 method to calculate random number 0=precompute if NUMPAR greater than 5000 or dynamic if NUMPAR less than or equal to 5000; 1=precalculated; 2=calculated in pardsp;
3=none; 4=random initial seed number and calculated in pardsp; 10=same as 0 with random initial seed for non-dispersion applications; 11=same as 1 with random initial seed for non-dispersion
applications; 12=same as 2 with random initial seed for non-dispersion applications; 13=same as 3 with random initial seed for non-dispersion applications
KRND=6 enhanced merge interval (hours)
KSPL=1 standard splitting interval (hours)
KWET=0 precipitation from an external file
KZMIX=0 vertical mixing adjustments: 0=none 1=PBL-average 2=scale_TVMIX
MAXDIM=1 maximum number of pollutants to carry on one particle
MAXPAR=10000 maximum number of particles carried in simulation
MESSG='MESSAGE' diagnostic message file base name
MGMIN=10 minimum meteorological subgrid size
MHRS=9999 trajectory restart duration limit (hours)
NBPTYP=1 number of redistributed particle size bins per pollutant type
NINIT=1 particle initialization (0-none; 1-once; 2-add; 3-replace)
NCYCL=0 pardump output cycle time
NDUMP=0 dump particles to/from file 0-none or nhrs-output interval
NSTR=0 trajectory restart time interval in hours
NTURB=0 Turbulence flag 0-on; 1-off
NUMPAR=2500 number of puffs or particles to be released per cycle
NVER=0 trajectory vertical split number
P10F=1.0 dust threshold velocity sensitivity factor
PINBC=' ' particle input file name for time-varying boundary conditions
PINPF=' ' particle input file name for initialization or boundary conditions
PLRISE=1 plume rise option 1-Briggs; 2-Sofiev; 3-Freitas used when heat/fire radiative power is nonzero in emissions input file
POUTF=' ' particle output file name
QCYCLE=0.0 optional cycling of emissions (hours)
RHB=80 RH (%) defining the base of a cloud
RHT=60 RH (%) defining the top of a cloud
SEED=-1 set the seed for the random number generator. (v988 and higher)
SPLITF=1.0 automatic size adjustment (<0=disable) factor for horizontal splitting
TERRFLG=0 if terrain height is not available in the meteorological file, estimate it from surface pressure (0) or from the terrain.asc file (1)
TKERD=0.18 unstable turbulent kinetic energy ratio = w'^2/(u'^2+v'^2)
TKERN=0.18 stable turbulent kinetic energy ratio
TLFRAC=0.1 The fraction of the Lagrangian vertical time scale used to calculate the dispersion time step in the STILT dispersion scheme.
TM_PRES=0 trajectory diagnostic output pressure variable marker flag
TM_TPOT=0 trajectory diagnostic output potential temperature
TM_TAMB=0 trajectory diagnostic output ambient temperature
TM_RAIN=0 trajectory diagnostic output rainfall rate
TM_MIXD=0 trajectory diagnostic output mixed layer depth
TM_TERR=0 trajectory diagnostic output terrain height
TM_DSWF=0 trajectory diagnostic output downward short-wave flux
TM_RELH=0 trajectory diagnostic output relative humidity
TOUT=60 trajectory output interval in minutes
TRATIO=0.75 advection stability ratio
TVMIX=1.0 vertical mixing scale factor
VARSIWANT='TIME','INDX','LONG','LATI','ZAGL','SIGW','TLGR','ZSFC','TEMP','SAMT','FOOT','SHTF','DMAS','DENS','RHFR','SPHU','DSWF','WOUT','MLHT','PRES'
variables written to PARTICLE_STILT.DAT.
VBUG=0 insect additional horizontal speed (m/s) added to trajectories
VDIST='VMSDIST' diagnostic file vertical mass distribution
VEGHT=0.5 Height below which particle's time is spent is tallied to calculate footprint for PARTICLE_STILT.DAT less than or equal to 1.0: fraction of PBL height; greater than 1.0: height
AGL (m)
VINIT=1 initial turbulent velocity: draw from distribution (1) or set to 0 (0)
VSCALEU=200.0 vertical Lagrangian time scale (sec) for unstable PBL
VSCALES=5.0 vertical Lagrangian time scale (sec) for stable PBL
WBWR=0.0 trajectory fixed rise velocity (m/s) required if vertical motion option = 9
WBWF=0.0 trajectory fixed fall velocity (m/s, >0) required if vertical motion option = 9 or 10
WBBH=0.0 trajectory height at which vertical velocity switches from rise to fall (m) required if vertical motion option = 9
WVERT=.FALSE. Vertical interpolation scheme for WRF fields .FALSE.:HYSPLIT scheme .TRUE.:WRF scheme
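The SETUP.CFG file itself is a Fortran namelist; any variable omitted from the file keeps its default value. A minimal sketch is shown below; the particular variables chosen are illustrative only:

&SETUP
numpar = 2500,
maxpar = 10000,
initd = 0,
khmax = 9999,
/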

Table of Contents
Advanced / Configuration Setup / Time Step Selection Criteria
Set Ratio - TRATIO (0.75) - defines the fraction of a grid cell that a particle or trajectory is permitted to transit in one
advection time step. Reducing this value will reduce the time step and increase computational times. Smaller time steps
result in less integration error. Integration errors can be estimated by computing a backward trajectory from the forward
trajectory end position and computing the ratio of the distance between that endpoint and the original starting point
divided by the total forward and backward trajectory distance.

Set Value - DELT (0.0) - is used to set the integration time step to a fixed value in minutes from 1 to 60. It should be
evenly divisible into 60. The default value of zero causes the program to compute the time step each hour according to
the maximum wind speed, meteorological and concentration grid spacing, and the TRATIO parameter. The fixed time
step option should only be used when strong winds in regions not relevant to the dispersion simulation (perhaps the
upper troposphere) are causing the model to run with small time steps. Improper specification of the time step could
cause aliasing errors in advection and underestimation of air concentrations. An alternate approach is to set DELT to a
negative value, in which case the absolute value is used as the minimum time step.
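As an illustration of the two settings above, a hedged SETUP.CFG fragment that both tightens the stability ratio and enforces a 5-minute minimum time step might look like the following (the values are illustrative, not recommendations):

&SETUP
tratio = 0.5,
delt = -5.0,
/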

Table of Contents
Advanced / Configuration Setup / Define subgrid and MSL/AGL units
Meteorological Sub-grid Size

MGMIN (10) - is the minimum size in grid units of the meteorological sub-grid. The sub-grid is set dynamically during
the calculation and depends upon the horizontal distribution of end-points and the wind speed. Larger sub-grids than
necessary will slow down the calculation by forcing the processing of meteorological data in regions where no transport
or dispersion calculations are being performed. In some situations, such as when the computation is between
meteorological data files that have no temporal overlap or insufficient spatial overlap, the model may try to reload
meteorological data with a new sub-grid. This will result in a fatal error. One solution to this error would be to set
the minimum sub-grid size (MGMIN) larger than the meteorological grid, forcing a full-grid data load.

Height Unit for Input

KMSL (0) - sets the default for input heights to be relative to the terrain height of the meteorological model. Hence input
heights are specified as AGL. Setting this parameter to "1" forces the model to subtract the local terrain height from
source input heights before further processing. Hence input heights should be specified as relative to Mean Sea Level
(MSL). In concentration simulations, the MSL option also forces the vertical concentration grid heights to be considered
relative to mean sea level. The special option (xBL) sets KMSL to 2 and treats the input height as a fraction of the
boundary layer (or mixed layer) depth at the trajectory starting location and time. This option is not valid with any
multiple trajectory in time configurations or any of the concentration-dispersion calculations. Valid starting heights can
be defined as any non-zero fraction less than 2.0.
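A hedged fragment combining the two settings described above, forcing a large sub-grid (to trigger a full-grid load) and interpreting CONTROL file heights as MSL (values illustrative):

&SETUP
mgmin = 1000,
kmsl = 1,
/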

Height Unit for Output

The namelist variable KAGL (=1) only applies to trajectories, not concentration simulations. For trajectory simulations,
the default option is to write the heights as AGL (KAGL=1), setting KAGL=0 converts the trajectory output heights to
MSL. The KMSL namelist parameter described above only applies to how heights are interpreted in the CONTROL
file, while KAGL determines how the model heights are written to the output file. This variable is not available through
the GUI.
Table of Contents
Advanced / Configuration Setup / Multiple Trajectories in Time
Restart Interval - NSTR (0) - Hours between trajectory starts for multiple trajectory-in-time simulations. When greater
than zero, new trajectories will be started from the original starting location every NSTR hours from the initial trajectory
starting time. See the special trajectory simulation section for more information.

Number of Multiple Levels - NVER (0) - Number of vertical levels at which trajectories are restarted for multiple
trajectory-in-time-and-space simulations. When greater than zero, new trajectories will be started at this
number of levels from the endpoint position at the NSTR interval. The level heights are set in the CONTROL file and
must match the number of starting locations. See the special trajectory simulation section for more information.

Trajectory Duration - MHRS (9999) - Sets the maximum temporal trajectory restart period. For instance if you want to
compute one month's worth of two-day trajectories, then the run duration would be 720 hours, MHRS would be set to
672 hours (720-48), and KHMAX (see below) would be set to 48 hours.
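The worked example above translates directly into a namelist sketch; the NSTR value shown here (daily restarts) is an assumption for illustration, while the other values come from the example:

&SETUP
nstr = 24,
mhrs = 672,
khmax = 48,
/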

Table of Contents
Advanced / Configuration Setup / Trajectory Points Output Frequency
The endpoint write interval TOUT (60) sets the time interval in minutes at which trajectory end-point positions will be
written to the output file. Output intervals of less than 60 minutes can be selected; this will also force the internal time
step to be an even multiple of the output interval.

The maximum trajectory duration KHMAX (9999) is the maximum duration of any one trajectory in hours.

Table of Contents
Advanced / Configuration Setup / Add Meteorology Output Along
Trajectory
Sets the option to write the value of certain meteorological variables along the trajectory to the trajectory output file.
A marker variable is set to (1) to turn on the option. Multiple variables may be selected for simultaneous output, but
only one variable may be plotted. If multiple variables are selected in conjunction with the trajectory display option,
then only the last variable output will be shown in the graphic. The variable output order is fixed in the program and
cannot be changed. A namelist sketch follows the list below.

Pressure in hPa TM_PRES (0|1)
Potential Temperature in degrees Kelvin TM_TPOT (0|1)
Ambient Temperature in degrees Kelvin TM_TAMB (0|1)
Precipitation rainfall in mm per hour TM_RAIN (0|1)
Mixing Depth in meters TM_MIXD (0|1)
Relative Humidity in percent TM_RELH (0|1)
Note: If relative humidity needs to be calculated by HYSPLIT from specific humidity, it may differ from the relative
humidity shown for other data sets because it may be calculated with respect to water rather than ice at low
temperatures, as is the case for the 0.5 degree GDAS data set.
Specific Humidity in g/(kg air) TM_SPHU (0|1)
Water Vapor Mixing Ratio in g/(kg dry-air) TM_MIXR (0|1)
Solar Radiation downward solar radiation flux in watts per square meter TM_DSWF (0|1)
Terrain Height in meters (required for the trajectory plot to show underlying terrain) TM_TERR (0|1)
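A hedged fragment turning on two of these output markers (any combination may be selected):

&SETUP
tm_tpot = 1,
tm_terr = 1,
/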

Table of Contents
Advanced / Configuration Setup / Meteorological Grid Offset Ensemble
Sets the dimensions at which the meteorological grid will be offset for the ensemble calculation. Only one offset (+ or -)
in either X, Y, or Z is applied per member.

Vertical Shift

Starting and ending member numbers can be selected where:

the range 1-9 applies for no vertical shift,
the range 10-18 applies for a positive vertical shift,
the range 19-27 applies to a negative vertical shift,
and DZF (0.01) is the vertical grid offset factor (0.01 ~ 250 m).

Horizontal Shift

DXF (1.0) is the west to east grid factor for offsetting the meteorological grid in the ensemble calculation.
DYF (1.0) is the south to north grid factor for offsetting the meteorological grid in the ensemble calculation.

Table of Contents
Advanced / Configuration Setup / Mixing Depth Computation Method
Permits the selection of the method used to compute the mixed layer depth. The mixed layer depth does not affect the
trajectory computation, only the dispersion and resulting air concentrations, by changing the rate of dispersion as well as
providing a lid to the vertical mixing of pollutants. Options include the following:

Use meteorological model - uses the mixed layer depth value from the meteorological model. Not all model
outputs provide mixed layer depths. Valid field IDs are MXHT, HPBL, and PBLH.
From temperature profile - computes the height of the mixed layer as the height at which the potential temperature
is at least two degrees greater than the minimum potential temperature. The computation is made from the top
down.
Compute from TKE profile - uses the meteorological model's TKE profile to estimate the mixing depth at the
height at which the TKE either decreases by a factor of two or falls to a value of less than 0.21. Not all model
outputs contain the TKE field.
Compute using a modified Richardson number approach that includes excess temperature for convective cases.
Set a constant value (m) - as a last resort, the mixing depth can also be set to a constant value (meters). The KMIXD
namelist variable either contains the constant mixing depth value or one of the integers 0, 1, 2, or 3 indicating one of the other choices.
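For example, a hedged fragment forcing the mixed layer depth to be computed from the temperature profile with a 250 m minimum (values illustrative):

&SETUP
kmixd = 1,
kmix0 = 250,
/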

Table of Contents
Advanced / Configuration Setup / Sub-grid Size and Vertical Coordinate
Meteorological Sub-grid Size

MGMIN (10) is the minimum size in grid units of the meteorological sub-grid. The sub-grid is set dynamically during
the calculation and depends upon the horizontal distribution of end-points and the wind speed. Larger sub-grids than
necessary will slow down the calculation by forcing the processing of meteorological data in regions where no transport
or dispersion calculations are being performed. In some situations, such as when the computation is between
meteorological data files that have no temporal overlap or insufficient spatial overlap, the model may try to reload
meteorological data with a new sub-grid. This will result in a fatal error. One solution to this error would be to set
the minimum sub-grid size (MGMIN) larger than the meteorological grid, forcing a full-grid data load.

Vertical Grid Coordinate System

KMSL (0) sets the default for input heights to be relative to the terrain height of the meteorological model. Hence input
heights are specified as AGL. Setting this parameter to "1" forces the model to subtract the local terrain height from
source input heights before further processing. Hence input heights should be specified as relative to Mean Sea Level
(MSL). In concentration simulations, the MSL option also forces the vertical concentration grid heights to be considered
relative to mean sea level. The option to set the release height dynamically as a fraction of the mixed layer depth is not
available for concentration-dispersion simulations.

Table of Contents
Advanced / Configuration Setup / Release Particles or Puffs
The menu is divided into three sections; in the first two, the value of the INITD namelist parameter is being set. In the
upper portion of the menu, the model is configured as either a full 3D particle or puff model, or some hybrid
combination of the two. The released particles or puffs maintain their mode for the entire duration of the simulation.
Valid options are:

0 - 3D particle horizontal and vertical (DEFAULT)
1 - Gaussian-horizontal and Top-Hat vertical puff (Gh-THv)
2 - Top-Hat-horizontal and vertical puff (THh-THv)
3 - Gaussian-horizontal puff and vertical particle distribution (Gh-Pv)
4 - Top-Hat-horizontal puff and vertical particle distribution (THh-Pv)

Introduced with the September 2004 version are mixed mode model calculations, where the mode can change during
transport depending upon the age (from release) of the particle. A mixed-mode may be selected to take advantage of the
more accurate representation of the 3D particle or Gaussian approach near the source and the smoother horizontal
distribution provided by one of the hybrid puff approaches at the longer transport distances. Valid options are:

103 - 3D particle (#0) converts to Gh-Pv (#3)
104 - 3D particle (#0) converts to THh-Pv (#4)
130 - Gh-Pv (#3) converts to 3D particle (#0)
140 - THh-Pv (#4) converts to 3D particle (#0)

Options 130 and 140 are the inverse of 103 and 104, respectively. Selecting either one starts the simulation as puffs (#3
or #4), which after CONAGE hours convert to 3D particles (#0). While the particle-to-puff conversion is one-to-one
(one particle is converted to one puff), the puff-to-particle conversion results in one puff splitting into multiple particles.
The exact number will vary depending upon the available array space as defined by the value of MAXPAR. Puff splitting
can only fill up half of the available array space. Therefore, if insufficient space has been pre-allocated, eventually
splitting will be turned off and the puff-to-particle conversion will become one-to-one. The number of splits per puff is
defined as 0.5 times the array size MAXPAR divided by the number of puffs to be emitted over the duration of the
simulation, where the number of puffs is computed as NUMPAR*NHRS/QCYCLE. The split rate is computed at
each entry to the program (hourly), and splitting may be turned back on if sufficient array space again becomes available.

An example of the conversion (130) of a single Gaussian puff to multiple particles after 6 hours of transport is shown
below as snapshots at 3 hours and 6 hours after release. On the left, the puff was converted to 50 particles
(NUMPAR=100) and on the right to 5000 particles (NUMPAR=10000).
An option introduced with the January 2009 version (4.9) converts puffs or particles to the Global Eulerian Model grid.
The mass is transferred to the global grid after the specified number of hours (GEMAGE). This approach should only be
used for very long-range (hemispheric) transport because of the artificial diffusion introduced when converting pollutant
plumes to a gridded advection-diffusion computational approach. The method is ideal for estimating contributions to
background concentrations. All mixed-mode particles/puffs (not just 3D) will convert to the global grid if the global
option is selected from the special runs menu.

The third section defines the 3-D particle dispersion algorithm. The default is to use the HYSPLIT dispersion scheme.
The STILT scheme, introduced in 2020, is a more complex module that includes a reflection/transmission scheme that
preserves a well-mixed distribution of particles as they move vertically across model layer interfaces. The STILT
scheme is computationally more expensive to run.
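A hedged sketch of a mixed-mode configuration, releasing 3D particles that convert to Gh-Pv puffs after the default 24-hour particle age (all values illustrative):

&SETUP
initd = 103,
conage = 24,
numpar = 2500,
maxpar = 10000,
/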

Table of Contents
Advanced / Configuration Setup / Particle-Puff Release Number Limits
The number of particles released per cycle NUMPAR (2500) would be the maximum number of particles or puffs
released over the duration of the emission. NUMPAR has a different meaning for puff and particle simulations. In a full
puff simulation (INITD = 1 or 2), only one puff per time step is released, regardless of the value of NUMPAR. In a
particle or mixed particle-puff simulation (INITD = 0, 3, 4), NUMPAR represents the total number of particles that are
released during one release cycle. Multiple release cycles cannot produce more than MAXPAR number of particles. For
a mixed simulation (particle-puff), NUMPAR should be greater than one but does not need to be anything close to what
is required for a full 3D particle simulation.

To simplify certain simulations when a constant particle number release rate is required, specifying a negative value for
NUMPAR will over-ride the particle number release rate calculations (in terms of the number of sources, hours
emission, and pollutants) and force the particle release rate to be |NUMPAR| particles per hour for each source and
pollutant.

The maximum number of particles MAXPAR (10000) is the maximum number permitted to be carried at any time
during a simulation. In all simulation types, particle or puffs are only emitted if the particle count is less than MAXPAR.
Note that there are situations where NUMPAR can (and should) exceed MAXPAR because the actual particle release rate
is computed by dividing NUMPAR by the number of sources, pollutants, and release hours.

The maximum particle duration KHMAX (9999) is the number of hours after release that a particle is dropped from the
simulation. For simulations using regional meteorological grids, particles are dropped when they reach the grid boundary.
However, when using global meteorological data, it may be computationally prudent to drop particles after they are no
longer over the region of interest.
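As a sketch of the release-rate arithmetic implied above: with NUMPAR=2500, two sources, one pollutant, and a 5-hour emission, the model would release about 2500/(2x1x5) = 250 particles per hour for each source (the numbers are illustrative). To force a fixed hourly rate directly, a negative value can be set instead:

&SETUP
numpar = -250,
maxpar = 50000,
/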

Table of Contents
Advanced / Configuration Setup / Emission Cycling and Point Source File
Emission Cycling

QCYCLE (0.0) is the number of hours between emission start cycles. A zero value means that the emissions are not
cycled. When non-zero, the number of emission hours is repeated again at QCYCLE hours after the starting emission
time specified in the input CONTROL file. The NUMPAR parameter set in the release number limits menu is defined as
the number of particles released over the emission duration per cycle unless its value is less than zero, in which case it
defines the hourly particle number release rate directly as |NUMPAR|. The QCYCLE parameter is also used in
conjunction with namelist option ICHEM=10 so that the concentrations from each release cycle are saved in their own
concentration array.

Optional Point-Source Emission File

EFILE (undefined) is the file name that contains point-source temporal emission factors, where each record contains the
{year month day hour duration latitude longitude emission-rate} for each emission period. Multiple emission periods
can be defined. The values in this file replace the values set in the CONTROL file. The emission file can be created from
the Advanced Menu tab. The format of this file is more complex than the one used to multiply time-varying unit-source
dispersion factors created by the ICHEM=10 option described above.
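A hedged single-record illustration of the EFILE fields listed above, in the order {year month day hour duration latitude longitude emission-rate}; the values and column widths are illustrative only, not a format specification:

2005 07 15 00 24 40.00 -90.00 1.0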

Table of Contents
Advanced / Configuration Setup / Configure Turbulence-Dispersion
Computation
Vertical Turbulence

KBLT is a flag used to set the vertical turbulence computational method, that is, how the turbulent velocity variances are
computed from either the heat and momentum fluxes or the model profiles of wind and temperature. Three different
computational approaches (Beljaars/Holtslag, Kanthar/Clayson, Hanna - see the technical documentation for details) are
defined. Another option is to use the TKE (Turbulent Kinetic Energy) output from the meteorological model provided
in the input meteorological data file; not all model data contain the TKE field. A final option is a special case where
the input meteorological data are assumed to contain the 3-dimensional component velocity variance fields, usually a
measured component.

1 - Beljaars/Holtslag and Betchov/Yaglom
2 - Kanthar/Clayson (DEFAULT)
3 - TKE field from the input meteorology data file
4 - Measured velocity variances from the input meteorology
5 - Hanna

Horizontal Turbulence

KDEF defines the way the horizontal turbulence is computed. The default approach is to compute the horizontal mixing
in proportion to the vertical mixing using one of the methods defined above (see the technical documentation for
details). The original computation was to derive the mixing from the deformation of the horizontal wind field. The
limitation of this method is that for shorter-range dispersion simulations (<100 km), the deformation parameterization
used in conjunction with larger-scale meteorological fields will not reflect the diurnal variations in horizontal
turbulence. The choice of a diurnally sensitive method will not affect longer-range calculations because the particles are
distributed over many meteorological grid cells, where variations in the transport vector dominate the horizontal
dispersion process.

0 - In proportion to the vertical turbulence (DEFAULT)
1 - Computed from the velocity deformation

Boundary Layer Stability

KBLS defines how the stability is computed. Normally when turbulent fluxes (heat and momentum) are available from
the meteorological data file, they are used to compute stability. Sometimes it may be desirable to force the stability to be
computed from the wind and temperature profiles, especially if the fluxes represent long-time period averages rather
than instantaneous values. If fluxes are not present, the profiles are used for the stability computation.

1 - Heat and momentum fluxes (DEFAULT)
2 - Wind and temperature profiles

Vertical and Horizontal Lagrangian Timescales

The ?SCALE? variables define the time in seconds beyond which the turbulence is no longer autocorrelated. These values
are used to convert various mixing coefficients into turbulent velocities. Different values are defined for the vertical and
horizontal components. The horizontal Lagrangian time scale default value is about equal to 1/f, where f is the Coriolis
parameter. The vertical time scale is further divided into stable and unstable conditions. The Lagrangian time scale also
controls the transition of the dispersion rate from linear to square root growth as a function of time. Setting the stable
vertical Lagrangian time scale (VSCALES) to -1 will cause the Hanna vertical Lagrangian time scale to be used, which
varies in space and time and is calculated from the vertical velocity variance estimated in the vertical turbulence
scheme. Setting VSCALES=0 will also result in the Hanna Lagrangian time scale being used (as with VSCALES=-1) if
HYSPLIT is run in STILT mode (ICHEM=8); otherwise VSCALES will be set to 5.0. When the Hanna Lagrangian time
scale is selected, the unstable vertical Lagrangian time scale (VSCALEU) is not used.

200.0 = VSCALEU vertical Lagrangian time scale (sec) for unstable PBL
5.0 = VSCALES vertical Lagrangian time scale (sec) for stable PBL
10800.0 = HSCALE horizontal Lagrangian time scale (sec)
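A hedged fragment selecting the Hanna turbulence parameterization together with the space- and time-varying Hanna vertical Lagrangian time scale described above (values illustrative):

&SETUP
kblt = 5,
vscales = -1.0,
/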

Vertical Mixing Profile

KZMIX determines if any additional processing is to be performed on the vertical mixing profile. The current default is
for no adjustments. In previous versions the boundary layer mixing profile was replaced with its average value. This
compensated for some meteorological data sets with poor vertical data resolution that might result in particles being
trapped near the surface due to insufficient mixing. The last two options are scale factors that can be applied to the
mixing coefficients and currently are not available for modification through the GUI.

0 - NONE vertical diffusivity in PBL varies with height (DEFAULT)
1 - Vertical diffusivity in PBL single average value
2 - scale boundary-layer values multiplied by TVMIX
3 - scale free-troposphere values multiplied by TVMIX

Mixed Layer Depth Computation

KMIXD is used to control how the boundary layer depth is computed. In addition to acting as a vertical lid to particle
dispersion (advection is not affected), the mixed layer depth is also used to scale the boundary layer mixing coefficients
and to compute turbulent fluxes from wind and temperature profiles. The default is to use the value provided by the
meteorological model through the input data set (KMIXD=0). If those are not available, the computation will use the
temperature profiles (KMIXD=1). If HYSPLIT is run in STILT mode (ICHEM=8), then the default is to use a modified
Richardson # approach (KMIXD=3) that includes excess temperature for convective cases for estimating the mixed
layer depth. See the technical document for details on these schemes. Setting KMIXD in the SETUP.CFG will over-ride
the defaults. Another option to use for testing is to replace the index value (0,1,2,3) with a value greater than 10. In this
situation, that value will be used as the mixed layer depth and will be constant for the duration of the simulation.

0 = Use meteorological model MIXD if available (DEFAULT)
1 = Compute from the temperature profile
2 = Compute from the TKE profile (requires that TKE be available)
3 = Compute from modified Richardson number (STILT MODE DEFAULT) (new in hysplit.v5.0.0)
>= 10 use this value as a constant mixing depth

KMIX0 is a related parameter that sets the minimum mixing depth. The default value is 150 meters and is related to the
typical vertical resolution of the meteorological data. A resolution near the surface of 15 hPa is typical of pressure-level
data files. This suggests that it is difficult to infer a mixed layer depth of less than 150 m (10 m per hPa) for most
meteorological input data.

150 = The minimum mixing depth (DEFAULT)

Puff Growth Computation Method

KPUFF is the flag to use either the linear with time or the empirical fit with time dispersion equations for the horizontal
growth rate of puffs. This parameter does not affect particle dispersion. The linear with time approach suggests that not
all turbulent scales have been sampled and puffs grow in proportion to increasing time. The empirical fit equation is
similar but the rate of puff growth decreases with time. Slower puff growth rate in the linear approach is represented by
the separation of puffs after splitting due to variations in the flow. The empirical approach should only be used in those
situations where puff splitting is constrained because of memory or computing time limitations.

0 = Linear with time puff growth (DEFAULT)
1 = Empirical fit to the puff growth

Turbulence Anisotropy Factors

TKERD and TKERN are the ratios of the vertical to the horizontal turbulence for daytime and nighttime, respectively.
TKER{D|N} is defined as W'^2/(U'^2+V'^2). A zero value forces the model to compute a TKE ratio consistent with its
turbulence parameterization. A non-zero value forces the vertical and horizontal values derived from the TKE to match
the specified ratio. This option is only valid with KBLT=3. The Urban button increases the internal TKE by 40% and
slightly raises the nighttime ratio to account for enhanced turbulence in an urban setting. The landuse file supplied with
the model is not of sufficient resolution to define urban areas and hence the urban setting applies to all points in the
computational domain.

0.18 = TKERD day (unstable) turbulent kinetic energy ratio
0.18 = TKERN night (stable) turbulent kinetic energy ratio
Other Turbulence Parameters Not Defined in GUI

1.0 = TVMIX vertical mixing scale factor (in conjunction with KZMIX)
0 = KRAND method to calculate random number (0=precompute if NUMPAR greater than 5000 or dynamic if
NUMPAR less than or equal to 5000; 1=precompute; 2=dynamic; 3=none; 4=random initial seed number and
dynamic; 10=same as 0 with random initial seed for non-dispersion applications; 11=same as 1 with random
initial seed for non-dispersion applications; 12=same as 2 with random initial seed for non-dispersion
applications; 13=same as 3 with random initial seed for non-dispersion applications)
Table of Contents
Advanced / Configuration Setup / Concentration Packing and Output Units
Concentration Packing Method

CPACK is the flag controlling binary concentration output packing. The default (CPACK=1) is to write the binary
concentration file at only those grid points that have a non-zero concentration value. Setting the flag to zero turns
packing off and results in the output of the entire concentration grid. Due to the nature of the packing method, if the
plume covers more than 50% of the concentration grid, the default concentration packing will result in a larger output
file than an unpacked concentration file. After selecting a new packing value, use the Update button to change its value
for the selected grid. The default menu is shown below.

The Point option (CPACK=2) is a special feature that forces the concentration grid to have a size of one grid point over
the location of the center point. It is a way to define a single sampling location.

Setting CPACK=3 defines a polar concentration grid composed of radial sectors centered about the origin
latitude-longitude position. In this case, the grid spacing and span defined in the CONTROL file define the arc sector
size (latitude values) and the sector distances (longitude values). Polar concentration grids may be displayed using the
program poleplot. Note that because of the way a particle's position on the grid is computed, it is not possible to define a
polar concentration grid spanning the dateline.

If there is more than one concentration grid, the selected packing method will be applied to all concentration grids.
However, unlike the other namelist variables, CPACK is dimensioned so that a different value can be set for each
grid. For instance, in the example shown below, the model was configured with two concentration grids, the first with
no packing and the second defined as a polar grid. If more than two concentration grids exist, but CPACK was
defined for only the first two grids, as in this example, all subsequent grids would be defined as rectangular grids and
use the default packing method CPACK=1. Commas should terminate each CPACK value.
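A minimal sketch of the dimensioned setting described above, with the first grid unpacked and the second defined as a polar grid:

&SETUP
cpack = 0, 3,
/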
Concentration or Mass Output Units

The default option (CMASS=0) is to output units of mass/volume. However, if mass-only output is desired, then setting
CMASS=1 will cause the model not to divide the grid cell sum by its volume.

Table of Contents
Advanced / Configuration Setup / Input and Output of Particle Files
Particle File Initialization Options

NINIT sets the type of initialization performed. There are two types of initializations. The first occurs at model startup,
prior to the start of the calculations; this sets the initial conditions for the calculation, and any particles on the
computational grid at model startup are loaded. In the second situation, during the model simulation, particles are loaded
each time step if a matching time is found in the boundary condition file. These particles may represent emissions from a
previous calculation and can be added to, or replace, the existing particles in the simulation. When NINIT is set to "0",
no particle initialization occurs even if the input file is defined. A value of "1" reads the file only during the
initialization process; no initialization will occur if the time of the particle dump does not match the time of the model
initialization. A value of "2" will check the file each hour during the simulation, and if the times match, the particles
will be added to those already contained in the simulation. A value of "3" is similar to the previous case, except the
particles in the file replace all the particles in the simulation.

PINPF sets the default name PARINIT for the particle input file that can be used for initialization or boundary
conditions. Note that particle files are just a dump of all the pollutants tracked by the model at a particular time and the
file can consist of either puffs, particles, or a combination of both.

PINBC with a default name PARINBC is a special file that can be used for boundary conditions while PINPF can be
used for initial conditions or boundary conditions. Setting the PINBC file name is not available through the GUI. For
instance, PINPF can be used for both initial (NINIT=1) and boundary conditions (NINIT>1), but when PINBC is also
defined, then it would replace PINPF after the initial input time.

Also not yet available through the GUI, the namelist variable KHINP<>0 sets the particle age that will be read from the
particle initialization file that may contain particles of many different ages each output time period. This option is
intended to be used with continuous initialization (NINIT=2). An example of this application may be to create a particle
file using high resolution regional meteorology, but only for the first few hours of transport. In the case of continuous
emissions, this file would be read each time period during the coarser grid simulation, but only initializing with particles
that are of age = KHINP. The final output requires that the concentration grids from the two simulations be added
together.

Also note that when a particle file is used for initialization of a simulation, the meteorological grid identification of a
particle is retained from the previous simulation (except when KHINP<>0). This means that if different meteorological
input files are defined in the two simulations, a particle on the second simulation may not be defined on the intended
meteorological grid at the start of the calculation.
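A hedged fragment initializing a simulation once at startup from the default particle input file (names as described above):

&SETUP
ninit = 1,
pinpf = 'PARINIT',
/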

Particle File Output Options

POUTF sets the name for the particle dump output file. PARDUMP is the default.

NDUMP can be set to dump out all the particle/puff points at selected time intervals to a file called PARDUMP. This
file can be read from the root directory at the start of a new simulation to continue the previous calculation. NDUMP
and NCYCL control when and how often particle positions are written to the file; each may be zero (the default) or a
negative or positive integer. A namelist sketch follows the summary below.

Summary of NDUMP, NCYCL combinations

NDUMP=0, NCYCL=0 (default)
No PARDUMP file written.

NDUMP=+M, NCYCL=0
Particle positions written once at hour M.

NDUMP=+M, NCYCL=+N
Particle positions written first at hour M, then again at M+1*N, M+2*N, M+3*N, ... NCYCL sets the repeat interval at
which the PARDUMP file is written after the first write at hour NDUMP. For instance, in a multi-day simulation, one
application would be to set NDUMP=24 and NCYCL=24 to output all points at the end of every simulation day. If the
model were to crash unexpectedly, the simulation could be restarted from the last PARDUMP output.

NDUMP=+M, NCYCL=-N
Particle positions written at simulation start, then again at M+1*N, M+2*N, M+3*N, ... However, each write over-writes
the previous one, so only the positions at the most recent M+t*N are saved in the file. The file therefore always contains
one time period, making a more compact file for initialization purposes.

NDUMP=0, NCYCL=+N
Particle positions written at simulation start, then again at 1*N, 2*N, 3*N, ...

NDUMP=-M, NCYCL=0
Particle positions written at each time step up to hour M. Note that if NDUMP is negative, the NCYCL field is ignored.

NDUMP=0, NCYCL=-N
Particle positions written at simulation start and at each time step up to hour N.
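The NDUMP=24, NCYCL=24 application mentioned above corresponds to the following namelist sketch (the output file name shown is the default):

&SETUP
poutf = 'PARDUMP',
ndump = 24,
ncycl = 24,
/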

Although not yet available through the GUI, there is a command line program par2conc that can be used to convert the
particle position file to a binary concentration file.

Table of Contents
Advanced / Configuration Setup / In-Line Conversion Modules
Setting the ICHEM variable to a non-zero value changes the model's internal configuration in terms of how it treats the
pollutants. Some conversion options require additional modules and specific requirements in setting up the CONTROL
file. See the linked discussion under each option for more information.

ICHEM is the chemistry module selection index:

0=none (DEFAULT)
1=Restructure the concentration grid to a source-receptor format
2=Convert species #1 to species #2 at [fraction] per hour
3=Enable the PM10 dust storm emission algorithm
4=Force concentration grid to be identical to meteorology grid
5=Deposit particles rather than reducing their mass
6=Divide output mass by air density (kg/m3) to sum as mixing ratio
7=Transport deposited particles on the ocean surface
8=STILT mode: mixing ratio (#6) and varying layer (#9)
9=Set concentration layer one to a fraction of the boundary layer
10=Restructure concentration grid into time-varying transfer matrix
11=Enable daughter product calculation

Additional Chemistry Options

MAXDIM is the maximum number of pollutants that can be attached to one particle. Otherwise, if multiple pollutants
are defined they are released as individual particles. This feature is usually required with chemical conversion modules,
which are not part of the standard distribution. However, the species 1->2 conversion option will automatically set
MAXDIM=2. Both the standard point source and emissions file input routines are enabled to release multiple species on
the same particle. In addition, for the daughter product calculation MAXDIM needs to be set equal to the number of
daughter nuclides. Setting MAXDIM greater than one forces all the species on the same particle, with the limitation that
the number of defined pollutants must equal MAXDIM. No other combinations are permitted. The advantage of this
approach is that not as many particles are required for the calculation as in the situation where every pollutant is on its
own particle. The limitation is that each species on the particle should have comparable dynamic characteristics, such
that they are all gases or have similar settling velocities; otherwise the transport and dispersion simulation would not be
very realistic. In the case of multiple species defined on a single particle, the gravitational settling routine will use the
characteristics of the last defined pollutant to compute settling for that particle. Deposition, decay, and other removal
processes are still handled independently for each pollutant on that particle.
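A hedged fragment enabling the species #1 to #2 conversion option noted above; MAXDIM=2 would be set automatically in this case but is shown explicitly for clarity:

&SETUP
ichem = 2,
maxdim = 2,
/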

Table of Contents
Advanced / Special Topics / Puff Split-Merge Issues
When the model is run in 3D particle mode, a fixed number of particles are released and followed for the duration of the
computational period. A sufficient number of particles need to be released so that at the end of the simulation, after the
particles have spread out, adjacent concentration grid cells have enough particles to properly represent the
concentration gradients. For very long duration simulations, a large particle number may be required and the
computational times may become prohibitive. One compromise is to use one of the hybrid particle-puff combinations.
Fewer puffs need to be released because as they grow to the size of the meteorological grid, they will split
into multiple particles. To avoid having the particle-puff number quickly exceed the computational array limits, puffs
occupying the same location may be merged. Several different parameter settings control the splitting and
merging; these are discussed in more detail in this section.

Basic Namelist Parameters

The namelist parameters KSPL, FRHS, FRVS, FRTS, KRND, FRMR, and FRME control the split-merge routines. Normally
these should all be left at their default values. Split routines are called at KSPL intervals, and merging is always called
hourly. Merging is most sensitive to the horizontal parameter FRHS: increasing it from the default value of 1 to 4 results
in almost all the puffs being merged by the time FRHS=4. Because merging is called after splitting, most puffs that are
merged are already in the same vertical position; hence there is little sensitivity to the vertical parameter FRVS. The time
parameter FRTS only matters when there are continuous emissions. KRND controls the interval at which the enhanced
merging routines are invoked. Enhanced merging is similar to standard merging except that the parameters are 50% larger
and it is selectively applied to those puffs at the lower end of the mass range as defined by FRME. The FRME default
value of 0.10 means that only those low-mass puffs whose combined mass represents 0.10 of the mass of all puffs will be
subjected to enhanced merging.

KSPL (1) is the interval in hours at which the puff splitting routines are called.
FRHS (1.0) is the maximum horizontal separation between puffs, in sigma units, within which puffs are permitted
to merge.
FRVS (0.01) is the maximum vertical separation between puffs, in sigma units, within which puffs are permitted
to merge. This parameter only applies to 3D puff simulations.
FRTS (0.10) is the maximum fractional difference in age within which puffs are permitted to merge.
KRND (6) is the interval in hours at which enhanced puff merging takes place. Enhanced merging is less
restrictive (FRHS, FRVS, FRTS are 50% larger) and will degrade the accuracy of the simulation, because puffs
can be further apart and still be merged into the same position. Enhanced merging only occurs when the puff
number exceeds 25% of MAXPAR.
FRME (0.10) is the fraction of the total mass used to define the low-mass puff population: puffs are ranked by
mass, and those at the low end whose combined mass accounts for no more than FRME of the total are subject to
enhanced merging.
FRMR (0.0) is the fraction of the mass that is permitted to be removed at KRND intervals. The normal situation is
to permit no mass loss. However, for certain simulations, such as when a pollutant has a high ambient background
relative to a typical plume signal, a small removal rate could significantly reduce the number of puffs on the grid
with no loss in the accuracy of the simulation. This removal procedure can also be specified for 3D particle
simulations.

Automated Split-Merge Procedures

When the puff-particle number approaches the array limits, further splitting is restricted until the merge procedures have
freed up additional array space. Each time splitting shuts down, FRHS is automatically incremented by 0.5 to increase
the effectiveness of puff merging, to a maximum value of 3 (FRHMAX in the namelist). Further, when splitting shuts
down, those remaining puffs that are eligible to split but cannot due to the split restriction are prevented from increasing
in size (both horizontal and vertical) until the split restriction has been removed. Also at the first occurrence of the split
restriction, the size to which a puff is permitted to grow before splitting is increased in proportion to the namelist
parameter SPLITF. Puff splitting occurs when the size of the puff reaches SPLITF x METEOROLOGY_GRID_SIZE, or
concentration grid size, whichever is larger. A termination message to standard output has been added prior to
HYSPLIT completion if puff splitting restrictions are in place at the end of the simulation. Such a message would
suggest that it might be necessary to rerun the simulation with added array space or different merge parameters.

FRHMAX (3.0) is the maximum permissible value for FRHS.


SPLITF (1.0) at the default value of 1.0 is automatically recomputed to be the ratio of the number of
concentration grid cells to the maximum number of particles permitted. For values less than one, this feature is
disabled and splitting occurs as before. For values greater than one, that value is used to determine when a puff
splits. The distance adjustment is based upon how many particles are required to cover the concentration grid
(assuming 20% in the PBL). If there is an insufficient number of particles, then the puffs are allowed to grow larger
before they split, providing for smoother patterns.

Theoretical Considerations

For example, if a global simulation is required using a 1-degree resolution concentration grid, that results in about
65,000 grid points at the surface. Clearly, a very long duration simulation that is expected to spread over much of the
domain will require a number of puffs comparable to the number of grid cells to provide smooth concentration patterns.
If we assume the lowest layer only represents about 10 percent of the volume over which the puffs have been transported
and mixed, it is not unrealistic to expect such a simulation to require 10 times as many puffs. An alternate approach
would be to dump the puffs into a global grid model rather than splitting them as they grow to the size of the
meteorological grid. Concentrations at any point would then be a sum from the two modeling approaches. This is a
variation of a plume-in-grid model.

Parameter Considerations

Very long duration simulations, or simulations using very fine resolution meteorological data, that have an insufficient
initial allocation of the puff array space (MAXPAR in the namelist) can result in split shutdown messages or perhaps
even emission shutdown messages. If any of these occur, the simulation results should be viewed with caution. The
results may be noisy and inaccurate if the emissions (new puffs released) have also been restricted. A simulation with
puff split restrictions may be improved by first increasing the array space to a value that still results in acceptable
simulation times. If that is not effective, or the CPU times become too long, the second choice could be to increase the
frequency of enhanced merging (perhaps decreasing KRND from 6 to 3, 2, or even 1), perhaps in combination with
decreasing the splitting frequency (increasing KSPL from 1 to 3, 6, or 12). Although decreasing the splitting frequency
will not be effective once splitting has shut down, it may delay the time at which splitting first shuts down. Enhanced
merging has little effect if most of the puffs have the same mass, perhaps because they were released at the same time. It
is most effective for continuous emission simulations, where there is a large range in puff mass due to the different
number of splits each puff has been subjected to. An effective removal method is setting FRMR to a non-zero value. This
has the effect of purging the simulation of low-mass puffs and would be most appropriate for continuous emission
simulations, where the puffs at longer distances have less importance. Used incorrectly, setting this parameter to a
non-zero value can seriously bias the model results. For long-duration continuous emission simulations, it may also be
just as effective to stagger the emissions, because it is not necessary to emit puffs every time step for realistic (and
accurate) results. This can be accomplished by emitting more mass over a shorter duration and then cycling the
emissions. For instance, instead of a continuous emission, one could emit 10 times the normal hourly amount over 0.1
hours (6 min) and then repeat the emission cycle (QCYCLE parameter) each hour; a sketch follows below. The emission
cycle could even be staggered over longer times.
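A hedged sketch of the staggered-emission approach from the example above: the CONTROL file would define a 0.1-hour release at 10 times the hourly rate, and the namelist would repeat that cycle every hour:

&SETUP
qcycle = 1.0,
/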

CPU Time Considerations

Long simulations may result in excessive CPU times because puffs will almost certainly be transported to the upper
regions of the atmosphere where the winds are strongest which results in very small integration time steps. If
computational accuracy in these upper regions is not required, perhaps because the only interest is boundary layer
transport, the time step should be set to a fixed value. Given the same number of puffs, the enhanced merging version of
the model should run substantially faster than the original version because when puff splitting shuts down and puffs
continue to grow, they become quite large and cover many more concentration grid points which must all be sampled.
Unrestricted puff splitting or restricting puff growth avoids this computational problem.

Table of Contents
Advanced / Configuration Setup / Variables Not Set in the GUI
CAPEMIN (-1) - defines the threshold value of CAPE (convective available potential energy J/kg) to initiate enhanced vertical mixing. A value of -1 (default
= -1) skips the CAPE computation such that if CAPE is not available in the input meteorological data file then it is NOT computed. When the value is greater
than zero, CAPE enhanced mixing is turned on and if CAPE is not available in the input data file, it is computed for each grid point, which may substantially
slow down the overall calculation. CAPE enhanced mixing results in particles in the cloud-layer being randomly redistributed within the cloud-layer if they
reside in a grid cell where CAPE exceeds CAPEMIN. An initial value of 500 J/kg is suggested to test this computation. Setting CAPEMIN to -2 will utilize the
Grell convective parameterization if Grell convective fluxes are in the meteorological input files. Setting CAPEMIN to -3 will utilize the extreme convection
parameterization.
DECAY (1) - when (1=default) lets deposited material radioactively decay after accumulation on the deposition grid until it is written to the output file. When
(0) then deposited material does not decay. In both situations, when the radioactive half-life is defined in the CONTROL file, the mass on the particles while
airborne will decay in both cases. This flag only determines what happens after the material is deposited on the grid and before the information is written to
the output file.
IVMAX (20) Number of variables written to PARTICLE_STILT.DAT when running in STILT mode (ICHEM = 8). Must equal the number of variables listed
for variable VARSIWANT.
K10M (1) - when (1=default) the 10m/2m values for winds and temperatures are used as the lowest data level if available in the meteorological file. When set
to zero, these data values will not be used in any computations.
KAGL (1) - KAGL only applies to trajectories, not concentration simulations. For trajectory simulations, the default option is to write the heights as AGL
(KAGL=1), setting KAGL=0 converts the trajectory output heights to MSL. The KMSL namelist parameter only applies to how heights are interpreted in the
CONTROL file, while KAGL determines how the model heights are written to the output file.
KHINP (0) - when non-zero sets the particle age in hours that will be read from the particle initialization file (PARINIT) that may contain particles of many
different ages each output time period. This option is intended to be used with continuous initialization (NINIT=2) in which the calculation is continued from
a previous simulation in which particles older than KHINP had been terminated by using the namelist variable KHMAX=KHINP. These continuous emission
simulations require particles to be added each time step. This configuration works best when the integration time step is set to the same fixed value for all
simulations.
KRAND (0) - same as KRAND=1 if NUMPAR is greater than 5000 or same as KRAND=2 if NUMPAR is less than or equal to 5000. KRAND=1
precomputes all the random numbers required for the particle turbulence calculations. KRAND=2 computes the random numbers at each dispersion time step.
The dynamic method is slightly more accurate than the precomputed random numbers, but it requires about twice as much CPU time. With large particle
release rates both methods will provide comparable results. The special case of KRAND=3 will result in no turbulent mixing being added to the particle
position; the particle track will follow the mean trajectory. This option is primarily intended for diagnostic testing. The option KRAND=4 follows the STILT
method of randomly selecting an initial seed number for the random number generator and computes these random numbers at each dispersion time step.
KRAND=0, 1, 2, and 3 use the same initial seed number for non-dispersion applications (i.e., emissions placement, deposition, chemistry) to allow
simulations to be repeatable. KRAND=10, 11, 12, and 13 are the same as KRAND=0, 1, 2, and 3 except a random initial seed number is used for non-
dispersion procedures. KRAND=4 also uses a random initial seed number for non-dispersion procedures.
KWET (1) - when (1=default) the code uses the precipitation values in the defined meteorological input data file. When KWET=2, the code will try to open a
file called raindata.txt which defines the directory, base, and suffix name of an external precipitation file that is in the ARL standard format. The text file also
defines the 4-character name of the precipitation variable as well as the conversion factor from file units to model units (meters/min). The file name is
automatically constructed from the prefix and suffix names using the current calculation date following the convention: {prefix}{YYYYMMDD}{suffix}. If the
raindata.txt file is not found in the startup directory, the model will create a template for manual editing and then stop.
MAXDIM (1) - is the maximum number of pollutants that can be attached to one particle. Otherwise, if multiple pollutants are defined they are released as
individual particles. This feature is only required with chemical conversion modules, which are not part of the standard distribution. However, the
species 1->2 conversion option will automatically set MAXDIM=2, but the standard emissions routine will still put each species on a different particle in its appropriate
array element. The only emissions routine that is enabled to release multiple species on the same particle is the emissions file option. Setting MAXDIM
greater than one forces all the species on the same particle with the limitation that the number of defined pollutants must equal MAXDIM. No other
combinations are permitted. The advantage of this approach is that not as many particles are required for the calculation as in the situation where every
pollutant is on its own particle. The limitation is that each species on the particle should have comparable dynamic characteristics, such that they are all gases
or have similar settling velocities, otherwise the transport and dispersion simulation would not be very realistic.
MESSG (MESSAGE) - is the default base name of the diagnostic MESSAGE file. In normal simulations, the name defaults to the base name. In an ensemble
multi-processor calculation, the message file name would be automatically set to correspond with the process number by appending the process number to the
base name: {base}.{000}, where the process number can be any integer between 001 and 999.
NBPTYP (1) - defines the number of bins assigned to each particle size as defined in the pollutant section of the CONTROL file. The default value of one
uses the input parameters. A value larger than one will create that number of particle size bins centered about each value in the CONTROL file. The program
creates the mass distribution for a range of particle sizes given just a few points within the distribution. We assume that dV/d(log R) is linear between the
defined points for an increasing cumulative mass distribution with respect to particle diameter. The input points in the CONTROL file should be sorted by
increasing particle size within the array. For instance, if the CONTROL file defines 3 particle sizes (5, 20, and 50), and NBPTYP=10, then the 5 micron
particle generates sizes from 2.3 to 8.1 microns while the 50 micron input particle generates sizes from 30.2 to 68.9 microns.
NTURB (0) - turbulence on: 0; turbulence off: 1.
OUTDT (0) - defines the output frequency in minutes of the endpoint positions in the PARTICLE.DAT file when the STILT emulation mode is configured.
The default value of 0 results in output each time step, while a positive value gives the output frequency in minutes. A negative value disables output. The
output frequency should be an even multiple of the time step and be evenly divisible into 60. In STILT emulation mode, the time step is forced to one minute.
PINBC - defines the input particle file name (default=PARINBC) that can be used for boundary conditions during the simulation.
PLRISE (1) - Briggs plume rise option (1), Sofiev plume rise option (2), or Freitas plume rise option (3); used when the heat/fire radiative power in the
emissions input file is nonzero.
RHB (80) - defines the initial relative humidity required to define the base of a cloud.
RHT (60) - the cloud continues upward until the relative humidity drops below this value. The values represent a compromise between the cases when the
cloud (and precipitation) covers an entire grid cell and those situations where the cloud may only cover a fraction of the meteorological grid cell. For
example, using the cloud fractions as defined in the EDAS (40 km resolution) data, 70% of the time 100% cloud cover occurs when the RH exceeds 80%.
SEED (0) - the value of the random seed is set to -1+SEED. The default value of SEED is 0, thus the default random seed for the single processor version is
-1.
TLFRAC (0.1) - the fraction of the Lagrangian vertical time scale used to calculate the dispersion time step in the STILT dispersion scheme. This variable is
used when the STILT dispersion scheme (IDSP = 2) is selected.
TVMIX - vertical mixing scale factor applied to the vertical mixing coefficients when KZMIX = 2 for boundary layer values and KZMIX = 3 for free-
troposphere values.
VARSIWANT
('TIME','INDX','LONG','LATI','ZAGL','SIGW','TLGR','ZSFC','TEMP','SAMT','FOOT','SHTF','DMAS','DENS','RHFR','SPHU','DSWF','WOUT','MLHT','PRES')
List of variables written to PARTICLE_STILT.DAT when running in STILT mode (ICHEM = 8). The following variable options are available: TIME (time
since start of simulation), SIGW (standard deviation of vertical velocity), TLGR (Lagrangian vertical time scale), LONG (longitude), LATI (latitude), ZAGL
(height AGL), ZSFC (terrain height), INDX (particle index), ICDX (cloud index: 1=updraft; 2=environment; 3=downdraft), TEMP (temperature at lowest
model layer), TEMZ (temperature), PRES (pressure), SAMT (time below VEGHT), FOOT (footprint), SHTF (sensible heat flux), WHTF (latent heat flux),
TCLD (total cloud cover), DMAS (particle weight change), DENS (air density), RHFR (relative humidity fraction), SPHU (specific humidity), LCLD (low
cloud cover %), ZLOC (limit of convection height), DSWF (downward shortwave radiation), WOUT (vertical mean wind velocity), MLHT (mixed layer
height), RAIN (total rain fall rate m/min), CRAI (convective rain fall rate m/min), ZFX1 (vertical displacement due to convective flux).
VBUG (0) - additional insect horizontal speed (m/s) added to trajectories.
VDIST (VMSDIST) - an output file that contains the hourly vertical mass distribution, similar to the values shown in the MESSAGE file every six hours. The
output can be disabled by setting the default file name to blank or null length. Programs vmsmerge and vmsread are available in the exec directory to view
and process these files.
VEGHT (0.5) - height below which a particle's residence time is tallied to create the footprint written to PARTICLE_STILT.DAT when run in STILT mode
(ICHEM = 8). Values less than or equal to 1.0 are interpreted as a fraction of the PBL height; values greater than 1.0 as a height AGL (m).
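
These advanced variables, like the namelist options set through the GUI, are defined by adding entries to the &SETUP namelist in the SETUP.CFG file read at startup. A minimal sketch of such a file (the variable values shown are illustrative only):

   &SETUP
   capemin = 500.0,
   decay = 1,
   krand = 2,
   kwet = 1,
   veght = 0.5,
   /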

Category Listing
Concentration File Conversion:
   c2array, c2datem, con2arcv, con2asc, con2cdf4, con2ctbt, con2dose,
   con2grad, con2rem, con2srs, con2stn, condecay, conprob, constnlst,
   lbfgsb, macc2date, matrix, stat2grid, statmain, tcsolve

Concentration File Utilities:
   con2inv, conappend, conavg, conavgpd, concacc, concadd, concmbn,
   concrop, concsum, conedit, confreq, conhavrg, coninfo, conlight,
   conmask, conmaxpd, conmaxv, conmerge, conpuff, conread, constats,
   mergextr, tcmsum

Ensembles:
   accudiv, ensperc, enstala, var2datem

Graphics Creation:
   boxplots, concplot, contour, ensplots, gridplot, isochron, parhplot,
   parsplot, parvplot, parxplot, poleplot, scatter, showgrid, stabplot,
   timeplot, trajplot, volcplot

Graphics Utilities:
   cat2svg, catps2ps, coversheet, gelabel, gen2xml, splitsvg, stn2ge

HYSPLIT Configuration:
   dat2cntl, dustbdy, dustedit, fires, firew, goes2ems,
   hy{c|t}{s|m}_{xxx}, hysptest, latlon, nuctree, prntbdy, testnuc,
   timeplus, vmsmerge, vmsread, zcoord

Meteorological Data to ARL Format:
   api2arl, apidump, arw2arl, content, drn2arl, grib2arl, narr2arl,
   sfc2arl, snd2arl, stn2arl

Meteorological Data Editing:
   add_data, add_grid, add_miss, add_time, add_velv, aer2arl, arl2meds,
   data_avrg, data_del, edit_flux, edit_head, edit_index, edit_miss,
   edit_null, file_copy, file_merge, pole2merc, rec_copy, rec_insert,
   rec_merge, xtrct_grid, xtrct_time

Meteorological Data Examination:
   arl2grad, chk_data, chk_file, chk_index, chk_rec, chk_times,
   data_year, datecol, datesmry, filedates, findgrib, gridxy2ll,
   inventory, metdates, metlatlon, metpoint, profile, unpacker, velvar,
   vmixing, xtrct_stn

Particle Utilities:
   asc2par, par2asc, par2conc, parmerge, paro2n, parshift, stn2par

Shapefile Manipulation:
   ascii2shp, dbf2txt, txt2dbf

Trajectory Analysis:
   clusend, cluslist, clusmem, clusplot, cluster, merglist, trajfind,
   trajfreq, trajfrmt, trajgrad, trajmean, trajmerg

Program       unix  mac   win   gui
accudiv       X     X     X     X
add_data      X     X     X
add_grid      X     X     X
add_miss      X     X     X
add_time      X     X     X
add_velv      X     X     X
aer2arl       X     X     X
api2arl       X     X
apidump       X     X
arl2grad      X     X     X
arl2meds      X     X     X
arw2arl       X     X     X     X
asc2par       X     X     X
ascii2shp     X     X     X     X
boxplots      X     X     X     X
c2array       X     X     X     X
c2datem       X     X     X     X
cat2svg       X     X     X     X
catps2ps      X     X     X     X
chk_data      X     X     X
chk_file      X     X     X     X
chk_index     X     X     X
chk_rec       X     X     X
chk_times     X     X     X
clusend       X     X     X     X
cluslist      X     X     X     X
clusmem       X     X     X     X
clusplot      X     X     X     X
cluster       X     X     X     X
con2arcv      X     X     X
con2asc       X     X     X     X
con2cdf4      X     X
con2ctbt      X     X     X
con2dose      X     X     X
con2grad      X     X     X
con2inv       X     X     X     X
con2rem       X     X     X     X
con2srs       X     X     X
con2stn       X     X     X     X
conappend     X     X     X
conavg        X     X     X
conavgpd      X     X     X
concacc       X     X     X
concadd       X     X     X     X
concmbn       X     X     X
concplot      X     X     X     X
concrop       X     X     X     X
concsum       X     X     X
condecay      X     X     X
conedit       X     X     X
confreq       X     X     X
conhavrg      X     X     X     X
coninfo       X     X     X     X
conlight      X     X     X     X
conmask       X     X     X
conmaxpd      X     X     X
conmaxv       X     X     X
conmerge      X     X     X
conprob       X     X     X     X
conpuff       X     X     X
conread       X     X     X     X
constats      X     X     X
constnlst     X     X     X
content       X     X     X
contour       X     X     X     X
coversheet
dat2cntl      X     X     X     X
data_avrg     X     X     X
data_del      X     X     X
data_year     X     X     X
datecol       X     X     X
datesmry      X     X     X
dbf2txt       X     X     X
drn2arl       X     X     X
dustbdy       X     X     X     X
dustedit      X     X     X
edit_flux     X     X     X
edit_head     X     X     X
edit_index    X     X     X
edit_miss     X     X     X
edit_null     X     X     X
ensperc       X     X     X
ensplots      X     X     X     X
enstala       X     X     X
file_copy     X     X     X     X
file_merge    X     X     X
filedates     X     X     X
findgrib      X     X     X
fires         X     X     X
firew         X     X     X
gelabel       X     X     X     X
gen2xml       X     X     X
goes2ems      X     X     X
grib2arl      X     X
gridplot      X     X     X     X
gridxy2ll     X     X     X
hycm_ens      X
hycm_std      X
hycs_cb4      X     X     X
hycs_ens      X     X     X     X
hycs_gem      X     X     X     X
hycs_grs      X     X     X
hycs_ier      X     X     X
hycs_so2      X     X     X
hycs_std      X     X     X     X
hycs_var      X     X     X     X
hysptest      X     X     X     X
hyts_ens      X     X     X     X
hyts_std      X     X     X     X
inventory     X     X     X
isochron      X     X     X     X
latlon        X     X     X     X
lbfgsb        X     X     X     X
macc2date     X     X     X
matrix        X     X     X     X
mergextr      X     X     X
merglist      X     X     X     X
metdates      X     X     X
metlatlon     X     X     X
metpoint      X     X     X
narr2arl      X     X     X
nuctree       X     X     X     X
par2asc       X     X     X
par2conc      X     X     X
parhplot      X     X     X     X
parmerge      X     X     X
paro2n        X     X     X
parshift      X     X     X     X
parsplot      X     X     X     X
parvplot      X     X     X     X
parxplot      X     X     X     X
pole2merc     X     X     X
poleplot      X     X     X     X
prntbdy       X     X     X
profile       X     X     X     X
rec_copy      X     X     X
rec_insert    X     X     X
rec_merge     X     X     X
scatter       X     X     X     X
sfc2arl       X     X     X
showgrid      X     X     X     X
snd2arl       X     X     X
splitsvg      X     X     X     X
stabplot      X     X     X
stat2grid     X     X     X     X
statmain      X     X     X     X
stn2arl       X     X     X     X
stn2ge        X     X     X     X
stn2par       X     X     X
tcmsum        X     X     X     X
tcsolve       X     X     X     X
testnuc       X     X     X
timeplot      X     X     X     X
timeplus      X     X     X
trajfind      X     X     X
trajfreq      X     X     X     X
trajfrmt      X     X     X
trajgrad      X     X     X
trajmean      X     X     X     X
trajmerg      X     X     X
trajplot      X     X     X     X
txt2dbf       X     X     X     X
unpacker      X     X     X
var2datem     X     X     X
velvar        X     X     X
vmixing       X     X     X
vmsmerge      X     X     X
vmsread       X     X     X
volcplot      X     X     X
xtrct_grid    X     X     X
xtrct_stn     X     X     X
xtrct_time    X     X     X
zcoord        X     X     X

Concentration File Conversion


c2array
USAGE: c2array -[arguments]
   -c[input to output concentration multiplier (1.0)]
   -d[write to diagnostic file (c2array.txt)]
   -h[half-life in days (0) to back decay measurements]
   -i[file name of file with names of input files (INFILE)]
   -m[measurement data file name (datem.txt)]
   -o[output concentration file name (c2array.csv)]
   -p[pollutant index select for multiple species (1)]
   -s[source position for backward mode (lat:lon)]
   -x[nearest neighbor or interpolation {(n)|i}]
   -z[level index select for multiple levels (1)]

Program to read multiple HYSPLIT concentration files and a DATEM
formatted measured data file to create a merged CSV formatted coefficient
matrix that can be used as input to a linear equation solver. The HYSPLIT
files can be forward calculations from a source point, one file per release
time, or backward calculations from multiple receptor points. The output
matrix will consist of columns each release time and one row for each
measurement placed in the last column. The HYSPLIT concentration binary
input file names are identified in an input file of file names. The program
determines the direction of the calculation (forward/backward) based upon
the time difference between successive concentration outputs. The
backward calculation is assumed to correspond to a measured value and
requires the command line entry of a source position (-s) where the model
calculations will be extracted. The half-life entry is used to decay correct
the measurement data back to the simulation start time.

GUI: srm_solve.tcl       


Tutorial: src_coef.sh
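
For example, assuming the backward-simulation concentration files are listed in INFILE and the measurements are in datem.txt (file names and the source position are illustrative), the coefficient matrix for a source at 45N 75W could be created with:

   c2array -iINFILE -mdatem.txt -oc2array.csv -s45.0:-75.0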

c2datem
USAGE: c2datem -[arguments]
   -c[input to output concentration multiplier]
   -d[write to diagnostic file]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -h[header info: 2=input text, (1)=file name, 0=skip]
   -i[input concentration file name]
   -m[measurement data file name]
   -o[output concentration file name]
   -p[pollutant index select for multiple species]
   -r[rotation angle:latitude:longitude]
   -s[supplemental lat-lon information to output]
   -t[model output period can be longer than measurement period;
      and ending times do not have to be aligned:(0)=no, 1=yes]
   -x[n(neighbor) or i(interpolation)]
   -z[level select for multiple levels, if z=-1 read height from DATEM file]

Program to read a HYSPLIT concentration file and match the results with
measurements in the DATEM file format to create an output file of model
calculations in the DATEM file format that correspond in location and
times with the measured data.

GUI: datem.tcl       


Tutorial: conc_stat.sh
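
For example, to extract the model values matching the measurements in datem.txt from the binary file cdump, using nearest-neighbor sampling and exponential output format (file names illustrative):

   c2datem -icdump -mdatem.txt -omodel.txt -e1 -xn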

con2arcv
Usage: con2arcv [6 character file ID]
Input: cdump
Output: YYMMDDHH_KP_KL_{6charID}.flt and .hdr

Converts a binary concentration file to ESRI's ArcView binary raster file
format as one output file per sampling time period for each pollutant (KP)
and level (KL).

con2asc
USAGE: con2asc -[options (default)]
   -c[Convert all records to one diagnostic file]
   -d[Delimit output by comma flag]
   -f[File flag for a file for each pollutant-level]
   -i[Input file name (cdump)]
   -m[Minimum output format flag]
   -o[Output file name base (cdump)]
   -s[Single output file flag]
   -t[Time expanded (minutes) file name flag]
   -u[Units conversion multiplier concentration (1.0)]
   -U[Units conversion multiplier deposition (1.0)]
   -v[Vary output by +lon+lat (default +lat+lon)]
   -x[Extended precision flag]
   -z[Zero points output flag]

Converts a binary concentration file to a latitude-longitude based ASCII
file, one record per grid point, for any grid point with any level or pollutant
greater than zero. One output file is created for each sampling time period.

GUI: conc_asc.tcl       


Tutorial: conc_util.sh

con2cdf4
Usage: con2cdf4 [options] inpfile outfile
   Options
     -d : NetCDF deflate level (0-9, default 1)
     -h : Show this help message and exit
   Arguments
     inpfile : Input HYSPLIT cdump file name
     outfile : Output NetCDF file name

Converts HYSPLIT binary cdump concentration output to NetCDF
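
For example, to convert the default cdump file to NetCDF using a higher deflate level:

   con2cdf4 -d 4 cdump cdump.nc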

con2ctbt
USAGE: con2ctbt -[options (default)]
   -i[Input file name (cdump)]
   -o[Output file name base (cdump.srm)]
   -e[Emission amount (1.3E+15)]
   -c[Concentration number to output (1)]
   -s[Start site label (XXX00)]

Converts a binary concentration file to a latitude-longitude based ASCII
file, one record per grid point, for any grid point not zero, in the agreed
upon Comprehensive Test Ban Treaty Organization exchange format. See
con2srs for a newer variation of this program compatible with output fields
generated by FLEXPART.

con2dose
Usage: con2dose [input file] [output file]

Temporally averages the input binary concentration file (output from
HYSPLIT), converts to dose units and outputs a new binary dose file. The
calculation applies only for long-term doses and requires the con2dose.dat
file which contains various dose conversion factors (EDE, bone, lung,
thyroid, TEDE, etc.) for each radionuclide. The con2dose.dat file is only
provided with the source code distribution. Requires each pollutant ID in
the HYSPLIT output file to have a matching radiological species ID in the
con2dose.dat file. This approach has been replaced by the more general
Transfer Coefficient Matrix (TCM) where surrogate species are used in the
HYSPLIT calculation and where decay is also applied in the post-
processing step.

con2grad
Usage: con2grads [HYSPLIT filename]
  Output:
    concen.grd - Grads binary concentration data
    concen.ctl - Grads control script
    species.dep - Grads display script
    species.con - Grads display script

Converts a binary HYSPLIT concentration file to GrADS binary format

con2rem
USAGE: con2rem -[options(default)]
   -a[activity input file name (activity.txt) or {create|fdnpp}]
   -b[Breathing rate for inhalation dose in m3/hr (0.925)]
   -c[Output type: (0)-dose, 1-air conc/dep]
   -d[Type of dose: (0)=rate (R/hr) 1=dose over averaging period (R)]
   -e[include inhalation dose (0)=No 1=Yes]
   -f[fuel (1)=U235 2=P239]
   -g[normal decay=0 (used only with c=1), time averaged decay (1)]
   -h[help with extended comments]
   -i[input file name (cdump)]
   -n[noble gas 4-char ID (NGAS)]
   -o[output file name (rdump)]
   -p[process (1)=high-energy 2=thermal]
   -q[when -d1 convert dose from rem=(0) to sieverts=1]
   -s[sum species (0), 1=match to input, 2=output each species]
   -t[no decay=0 input decayed, or decay by species half-life (1)]
   -u[units conversion Bq->pCi, missing: assume input Bq]
   -w[fission activity in thermal mega-watt-hours, replaces -y option]
   -x[extended integration time in hours beyond calculation (0)]
   -y[yield (1.0)=kT]
   -z[fixed integration time in hours (0)]

Converts a HYSPLIT binary concentration/deposition file to dose in rem/hr.
The HYSPLIT calculation should be done using a unit emission so that the
concentration units are m-3 and the deposition units are m-2. This program
reads the file activity.txt which contains the activity (Bq) at time=0 for all
the isotope products. The -acreate switch will create a sample activity.txt
file for a 1KT device, while the -afdnpp switch creates a sample file where
the activity equals the maximum 3h emissions from the FDNPP accident.
During the processing step, the cumulative product of the activity and dose
factor is computed for each decay weighted concentration averaging period
independently for noble gases and particles. For general applications,
HYSPLIT should be configured for two species: NGAS and RNUC. For
most radiological dose applications, this method is preferred over the
original approach using con2dose because decay is treated in the post-
processing step and multiple radionuclides can be assigned to a single
computational species. This approach works best when emissions are
constant. For time-varying emissions, use the program condecay before
running con2rem.

GUI: con2rem.tcl       


Tutorial: dose_cemit.sh
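
For example, a unit-emission simulation could be converted to a dose rate by first generating the sample activity inventory (the -acreate switch writes activity.txt) and then processing the concentration file (file names are the program defaults):

   con2rem -acreate
   con2rem -icdump -ordump -d0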

con2srs
USAGE: con2srs -[options (default)]
   -i[Input file name (cdump)]
   -o[Output file name base (cdump.srm)]
   -e[Emission amount (1.3E+15)]
   -c[Concentration number to output (1)]
   -l[Level to output (1)]
   -s[Start site label (XXX00)]
   -r (specifies regional grid. Global by default)
   -d (process the deposition grid; off by default; overrides -l)

Converts a binary concentration file to a latitude-longitude based ASCII
file, one record per grid point, for any grid point not zero. Adheres to the
informal source-receptor-sensitivity (SRS) output format that appears to be
in use in the FLEXPART community for storing concentrations, backward
sensitivities, and depositions. This program forces a simple output - a
single species at a single level. It is up to other programs to combine SRS
files into something more complex, if and when desired. See con2ctbt for
the original version of this program.

con2stn
USAGE: con2stn [-options]
   -a[rotation angle:latitude:longitude]
   -c[input to output concentration multiplier]
   -d[mm/dd/yyyy format: (0)=no 1=w-Label 2=wo-Label]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -h[half-life-days (one entry for each pollutant)]
   -i[input concentration file name]
   -m[maximum number of stations (200)]
   -o[output concentration file name]
   -p[pollutant index (1) or 0 for all pollutants]
   -r[record format 0=record (1)=column 2=datem]
   -s[station list file name (contains: id lat lon)]
   -t[transfer coefficient processing]
   -x[(n)=neighbor or i for interpolation]
   -z[level index (1) or 0 for all levels]

Program to read a hysplit concentration file and print the contents at one or
more locations to an output file.

GUI: conc_stn.tcl       


Tutorial: conc_util.sh
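
For example, given a station list file stations.txt containing one id-lat-lon entry per line (values illustrative):

   717 45.00 -75.00
   718 46.00 -76.00

the concentrations at those points could be written in DATEM format with:

   con2stn -icdump -ocdump.txt -sstations.txt -r2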

condecay
USAGE: condecay -[One Entry per Conversion] +[options(default)]
   -{Cnumb:Index:HalfL:Radio}
       Cnumb=column number in emission file
       Index=index number in binary input file
       HalfL=half life in days
       Radio=radionuclide character id
   +p[Process ID number]
   +d[Directory for TG_{YYYYMMDDHH} files]
   +e[Emissions base file name (cfactors).txt]
   +i[Input file base name (TG_)]
   +o[Output file base name (DG_)]
   +t[Time decay start: YYYYMMDDHH
       | 0000000000 each file
       | default from file #1]

Processes a series of binary unit-source concentration files, each with a
file name starting with TG_{YYYYMMDDHH} that identifies the
associated release time. The program command line contains one entry for
each species to be multiplied by a source term and decayed by its half-life to
the end of the sample collection period. Not all species in the input file need
to be converted. The resulting output file name is called
DG_{YYYYMMDDHH}. Emission values are defined in the input file
named cfactors.txt. All concentration input files must be identical in terms
of species and grid resolution. This program would be used in conjunction
with TCM processing applications. In the situation where the emission rate is
constant with time, the program con2rem can be used to apply emissions,
decay, and convert to dose. However, when emissions are time-varying,
then the concentration file for every release period (TG files) would be
processed by condecay for the emissions and decay, and then by con2rem
for the final conversion to dose.
Tutorial: dose_temit.sh
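
For example, assuming species 1 in the TG_ files represents I-131 (half-life 8.02 days) with its emissions defined in column 1 of cfactors.txt, the conversion entry would be:

   condecay -1:1:8.02:I131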

conprob
Usage: conprob [-options]
   -b[base input file name (cdump)]
   -c[(conc_high:conc_mid:conc_low) set values]
   -d[diagnostic output turned on]
   -p[pollutant index number (when input contains more than 1)]
   -t[temporal aggregation period (1)]
   -v[value below which concentration assumed to equal zero (0.0)]
   -x[Concentration multiplier: (1.0)]
   -y[Deposition multiplier: (1.0)]
   -z[level index number (when input contains more than 1)]

Reads multiple HYSPLIT concentration files and computes various
probability levels independently at each grid point and then creates a new
output file for each probability level (files=probXX). Also computed are the
probabilities to exceed predefined concentration levels
(files=cmax{01|10|00}) from minimum to maximum. Other output files
include the mean, variance, coefficient of variation, and number of
members.

GUI: prob_files.tcl       


Tutorial: ens_data.sh
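
For example, assuming the ensemble member concentration files all share the base name cdump with sequential member suffixes, the full set of probability files is generated with:

   conprob -bcdump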

constnlst
USAGE: constnlst [-options]
   -i[Input file name of concentration file names]
   -o[output concentration file name]
   -a[start time (YYMMDDHHMM)]
   -b[stop time (YYMMDDHHMM)]
   -e[list concentrations (0) or 1 sum between start and stop times]
   -h[write header to output (0) or 1 for no headers]
   -w[output sample start time (0) or 1 simulation start time]
   -c[input to output concentration multiplier]
   -s[station list file name (contains: id lat lon)]
   -x[(n)=neighbor or i for interpolation]
   -z[level index (1) or 0 for all levels]
   -p[pollutant index (1) or 0 for all pollutants]
   -r[record format 0=record (1)=column 2=datem]
   -m[maximum number of stations (200)]
   -d[mm/dd/yyyy format: (0)=no 1=w-Label 2=wo-Label]

Program to read a HYSPLIT concentration file and print the contents at
selected locations for a given time (if -a and -b are the same) or sum
concentrations for a range of times. This program is similar to con2stn but
with some different options required for certain web applications.

lbfgsb
USAGE: lbfgsb
Input: PARAMETER_IN_000
Output: SOURCE_OUT_000

The code will solve for values of the source emission rate vector to satisfy
the measured values using a cost function approach to minimize the
difference between the observations and model predictions by varying the
source term from a first-guess estimate. The model predictions file should
be in CSV format with the last column corresponding to the measurements.
The first row is the time associated with each unknown source. The CSV
file could be created using the program C2ARRAY. All solution
configuration values are set in the PARAMETER_IN file. See the program's
README file for more information.

GUI: srm_lbfgsb.tcl       


Tutorial: src_cost.sh

macc2date
Usage: macc2date [filein] [fileout]

Program reads the output file of the cost function (lbfgsb) analysis and
converts the time field from Excel fractional-days format to year, month,
day, hour, value, where value is the emission rate. The output is in the
free-form cfactors format for time-varying emissions, which is much simpler
than the EMITIMES format. This file is normally referred to as cfactors.txt
and it is used by the programs tcmsum and condecay.

GUI: srm_sum.tcl for use of cfactors.txt

matrix
Usage: matrix [-options]
   -i[input file name]
   -o[output file name]
   -y[latitude]
   -x[longitude]
   -z[level vertical index]
   -m[source (s) or receptor (r) mode]
   -f[force release date: yymmddhh]
   -d[date sample select: mmddhhnn]
   -n[normalization on]

The matrix program is used to extract source or receptor information from
the HYSPLIT generated source-receptor matrix. Two output modes are
available. In receptor mode a receptor location is specified and the program
computes the contribution of each source to the selected receptor point. In
source mode a source location is specified and the program outputs the
concentrations at all the receptor locations. All output files are in standard
HYSPLIT binary.

GUI: srm_disp.tcl       


Tutorial: src_recp.sh

stat2grid
USAGE: stat2grid [-options]
   -i{in file}
   -o{out file}
   -v{variable #(3 to 8)}}

Program reads the statmain statistics output file (from -s) and converts the
values by position to the HYSPLIT concentration grid format permitting
plots of model performance statistics with respect to location. The -v selects
one of the following metrics: corr(3), nmse(4), fb(5), fms(6), ksp(7), and
rank(8). The index value represents the column number in the statmain
output file.

GUI: conc_rank.tcl       


Tutorial: src_stats.sh

statmain
USAGE: statmain [arguments]
  -a[averaging: space (s), time (t), low-pass filter (#)]
  -b[bootstrap resampling to compute correlation]
  -c[concentration normalization factor]
  -d[measurement directory or file name (when -t0)]
  -e[enhanced output in supplemental file]
  -f[station selection file suffix]
  -g[generate random initial seed for bootstrap resampling]
  -l[contingency level for spatial statistics (1)]
  -m[model variation string (when -t<>0]
  -o[write (1) merged data or read (2) merged file]
  -p[plume statistics (both meas & calc gt zero)]
  -r[calculation results directory or file name (when -t0)]
  -s[supplemental appended output file name]
  -t[tracer character number: (0),1,2,3, etc]
  -x[exclude 0,0 pairs]
  -y[set calculated to zero when below measured zero]
  -z[percentile level of non-zero for zero measured]

Program reads DATEM formatted measured and calculated air
concentration files and performs some elementary statistical analyses for
model comparison. Procedures are based upon the original ETEX workshop
metrics. Input and output file names can be generated automatically when
the tracer character ID is set to a non-zero value.

GUI: datem.tcl       


Tutorial: src_stats.sh
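
For example, to compare the measurements in datem.txt with DATEM-formatted model output and also write the merged data file read by the scatter program (file names illustrative):

   statmain -t0 -ddatem.txt -rmodel.txt -o1
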
tcsolve
Usage: tcsolve -[options]
   -i[input CSV matrix (c2array.csv)]
   -o[output file (tcsolve.txt)]
   -p[percent delete (0)]
   -u[units conversion(1)]
   -z[zero value(0)]

The code will compute the inverse of the coefficient matrix (CM) and solve
for values of the source emission rate vector to satisfy the measured values.
The default approach is to use Singular Value Decomposition on the CM,
which is defined by M equations (receptors) and N unknowns (sources).
The input file should be in CSV format with the last column corresponding
to the measurements. The first row is the time associated with each
unknown source. The CSV file could be created using the program
C2ARRAY which processes the HYSPLIT binary coefficients and DATEM
formatted measurement data files. The percent delete is the percentage of
the lowest TCM values to be deleted.

GUI: srm_solve.tcl       


Tutorial: src_coef.sh
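
For example, the two programs are typically run in sequence using their default file names:

   c2array -iINFILE -mdatem.txt -oc2array.csv
   tcsolve -ic2array.csv -otcsolve.txt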

Concentration File Utilities


con2inv
USAGE: con2inv -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concinv.bin)]
   -d[process id (bin)]

Program to output the inverse of the concentration for source attribution
calculations. The pollutant 4-char ID remains the same.

GUI: conc_invr.tcl

conappend
USAGE: conappend -[options]
   -i[File name of containing input file names]
   -o[Output summation file]
   -c[Concentration conversion factor (1.0)]

Program to read multiple HYSPLIT concentration files and append the
values into a single file. The files all need to be identical in terms of grid
size but each file would represent a different time period.

conavg
Usage: conavg [-options]
   -b[base input file name (cdump)]
   -d[diagnostic output turned on]
Output: cmean

Reads multiple identical concentration files, in terms of the grid, and
computes the mean value at each grid point, which is then written to the
output file cmean. Note individual time headers are not checked and the
output time fields are the same as the last input file. This program provides
a quick way to generate an ensemble mean rather than using conprob which
generates all the probability files including the mean.

conavgpd
USAGE: conavgpd -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -m[concentration multiplier (1.0)]
   -h[averaging period in hours]
   -a[start averaging period (YYMMDDHHMN)]
   -b[stop averaging period (YYMMDDHHMN)]
   -r[average (0) or sum=1]

Combines multiple sequential time periods from a binary concentration file
and computes the average or sum based upon the period (-h) specified by
the user and writes out a new concentration file.
Tutorial: dose_cemit.sh
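
For example, to average hourly concentration output into 24-hour periods (file names are the program defaults):

   conavgpd -icdump -oxdump -h24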

concacc
USAGE: concacc -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concacc.bin)]

Program to accumulate concentrations from one time period to the next and
output the results to another file. For example, the program can be used to
sum doses from individual time periods to get the total dose.
Tutorial: ind_test.sh

concadd
USAGE: concadd -[options(default)]
   -i[input file name (cdump)]
   -b[base file name to which input file is added (gdump)]
   -o[output file name with base file format (concadd.bin)]
   -g[radius (grid points around center) to exclude; default=0]
   -c[concentration conversion factor (1.0)]
   -p[process (0)=add | 1=max value | 2=zero mask | 3=intersect]
         | 4=replace | 5=divide c1/c2]
   -t[forces the sampling time start stop times as minimum and maximum]
   -z[zero mask value (base set to zero when input>mask; default=0.0)]
       if zero mask value < 0 :
       base set to zero when input> zero mask value * base

Program to add together two gridded HYSPLIT concentration files, where
the input file is added into the base file and written as a new file. The input
and base file need not be identical in time, but they must have the same
number of pollutants and levels. The file contents are matched by index
number and not height or species. The horizontal grids need to be identical
or even multiples of each other and they also need to intersect at the corner
point. Summations will occur regardless of any grid mismatches. Options
are also available to select the maximum value or define the input file as a
zero-mask such that grid points with non-zero values become zero at those
locations in the base file when written to the output file. The intersect option
only adds the input file to the base file when both have values greater than
zero at the intersection point, otherwise the value is set to zero. The replace
option will replace the value from the base file with the value from the input
file. This option is normally used in conjunction with a non-zero radius. The
-t flag forces the time labels for each sampling period to represent the
minimum starting time and maximum stop time between the two input files.

GUI: conc_add.tcl       


Tutorial: ind_test.sh
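
For example, to add the input file cdump into the base file gdump and write the sum to a new file (file names are the program defaults):

   concadd -icdump -bgdump -oconcadd.bin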

concmbn
USAGE: concmbn -[options(default)]
   -b[crop the final grid: yes:(1), no:0]
   -c[coarse grid file name (cdump)]
   -f[fine grid file name (fdump)]
   -m[Multiplier used with -v option: (1.0)-one (default)]
   -o[output file name (concmbn.bin)]
   -p[percent of white space added around plume (10)]
   -s[number of surrounding coarse grid points to average, (0)=none, x]
   -t[number of perimeter fine grid points to average, (0)=none, x]
   -v[Minimum value to extract to: (0.0)-zero (default) ]

Program to combine two gridded HYSPLIT concentration files; one a fine
grid and one a coarse grid. The coarse grid is recalculated on a grid
covering the same area except that the grid has the resolution of the fine
grid provided. The file contents are matched by index number but not height
or species. You must make sure that the fine grid spacing and span are even
multiples of the coarse grid spacing and span. The coarse grid points must
also be points on the fine grid. Summations will occur regardless of any
grid mismatches. Prior to writing the coarse grid values into the final
large fine grid, the new fine grid values are given a 1/r^2 weighting using
the surrounding coarse grid values.

concrop
USAGE: concrop -[options (default)]
   -b[latmin:lonmin:latmax:lonmax (all zeroes will produce full grid extract]
   -d[process id (txt)]
   -f[FCSTHR.TXT: (0)-none 1-output]
   -i[Input file name: (cdump)]
   -g[Grid limits rather than cropped file: (0)-no 1-center-radius -1-lat-lons]
   -o[Output file name: (ccrop)]
   -m[Multiplier used with -v option: (1.0)-one (default) ]
   -p[Time extract MMDDHH:MMDDHH]
   -v[Minimum value to extract to: (0.0)-zero (default) ]
   -w[Percent of white space added around plume (10)]
   -x[override white space: (0)-no (default) 1-yes]

Removes the whitespace around a HYSPLIT binary concentration grid file
by resizing the grid about the non-zero concentration domain or by
specifying a sub-grid on the command line. Using the -x option will force
the extract of the selected domain regardless of white space.

GUI: conc_xtrct.tcl

concsum
USAGE: concsum -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concsum.bin)]
   -d[process id (bin)]
   -l[level sum and pollutant sum]
   -p[pollutant sum label (SUM)]

Program to add multi-pollutant concentrations from one HYSPLIT
concentration file and output the sum (as one pollutant) to another file (e.g.
to get the total concentration when pollutants are different particle sizes).
The level sum flag sums the pollutants and levels to one output field.
Tutorial: dose_cemit.sh

conedit
USAGE: conedit [options]
   -i[input file name (cdump)]
   -o[output file name (cedit)]
   -m[meteorology string (skip)]
   -p[pollutant string (skip)]

Edits the HYSPLIT binary concentration file by replacing the meteorology
and pollutant type four character identification strings. Only files with one
pollutant are supported. A string is left unchanged if not defined.

confreq
Usage: confreq [-arguments]
   -h{elp}
   -f{ile name of file names}
   -z{eros included}

Computes the concentration frequency distribution for one or more
HYSPLIT binary concentration files specified in the file name of input files.
All levels and species are included. Mismatches in time and grid are
ignored. The current compilation maximum sort dimension is one million.

conhavrg
USAGE: conhavrg [options]
   -i[input file name (cdump)]
   -o[output file name (cavrg)]
   -s[surrounding grid points to average (1)]

Horizontally averages the HYSPLIT concentration file according to a grid
point scan distance (s), where the area average equals 2s*2s. Scan
distance is not adjusted for differences in longitude distance with latitude.
Mass is adjusted to ensure that the average concentration remains the same.

GUI: conc_havg.tcl

coninfo
USAGE: coninfo -[options(default)]
   -i[input file name (cdump)]
   -t[list start/stop times (0=no), 1=yes]
   -j[output only first/last times in file (0=no), 1=Julian, 2=date]

Program prints summary information about the contents of a HYSPLIT
concentration file such as the grid information, the start and stop times of
each time period, or just the first and last time period.
GUI: conc_xtrct.tcl

conlight
USAGE: conlight -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -l[level number to extract (0 = all)]
   -m[concentration multiplier (1.0)]
   -p[period MMDDHH:MMDDHH]
   -s[species: 0-sum, (1)-select]
   -t[time periods to extract (1)]
   -z[concentration minimum (0.0)]
   -y[concentration minimum for sum pollutants (0.0)]

Extracts individual records from the binary concentration file where every -
t{count}th record is output and if the file contains multiple species, the -
s{pecies} number is selected for output. Multiplier and minimum values
may be applied.

GUI: conc_add.tcl       


Tutorial: ens_stats.sh

conmask
Usage: conmask [input] [mask] [output] [cmin]
   [input] - name of input file of concentrations
   [mask] - name of mask file where input may = 0
   [output] - name output file from input*mask
   [cmin] - input gt cmin mask then set input = 0

The program reads two HYSPLIT concentration files and applies the second
file as a mask to the first. Any non-zero value in file #2 becomes zero in
file #1. For example, in source attribution calculations, the second file can
be used to eliminate grid cells that do not contain the source because the
backward calculation was associated with a measurement of zero.

conmaxpd
USAGE: conmaxpd -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -m[concentration multiplier (1.0)]
   -h[window period for maximum in hours]
   -a[start time for window (YYMMDDHH)]
   -b[stop time for window (YYMMDDHH)]

Computes the maximum concentration at each grid point per time window
for all time windows that fall between the window start and stop times.

conmaxv
USAGE: conmaxv [options]
   -i[input file name (cdump)]
   -o[output file name (cavrg)]
   -s[sliding time window in minutes (-1, defaults to data interval)]

Computes the maximum value at each grid cell using a sliding time
window. The sliding time interval must be evenly divisible by the averaging
(data interval) time. The output represents a single time period, the
maximum value over the entire time period. For example, for a one hour
simulation, with output every 5 minutes, there would be 12 time periods in
the HYSPLIT concentration output file. If the sliding time window is set to
15 minutes, at each grid point the program computes an average
concentration for each sliding window (time periods 1,2,3; then 2,3,4; and
so on). The single maximum value of those sliding window averages is
written to the output file at that grid point. Maximum concentrations are used primarily in
hazardous chemical exposure calculations.

conmerge
USAGE: conmerge -[options]
   -d[Date to stop process: YYMMDDHH {00000000}]
   -i[Input file name of file names]
   -o[Output file name]
   -t[Time summation flag]

The program reads multiple HYSPLIT concentration files and sums the
values to a single file. Options are to sum only matching time periods or
sum all time periods into one time period.
Tutorial: dose_temit.sh
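
For example, assuming INFILE contains one concentration file name per line, the matching time periods are summed into a single file (file names illustrative):

   conmerge -iINFILE -osumdump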

conpuff
Usage: conpuff
   prompted standard input:
   line 1 - Concentration file name
Output: standard

Reads the gridded HYSPLIT concentration file and prints the maximum
concentration and puff mass centroid location for each concentration
averaging period.

conread
Usage: conread
   prompted standard input:
   line 1 - Concentration file name
   line 2 - Extended diagnostics (0/1)
Output: standard

Program to read the gridded HYSPLIT concentration file and dump selected
statistics each time period to standard output.

GUI: conc_file.tcl       


Tutorial: conc_util.sh

constats
USAGE: constats {arguments}
   -f#[concentration file name (#<=2)]
   -o[output file name; undef stdout]
   -t[temporal match skip hour]
   -v[verbose output]

Compares two HYSPLIT concentration files [-f1{name} and -f2{name}] by
computing the FMS overlap statistic with the assumption that both grids
must be identical in terms of grid size, levels, pollutants, and number of
time periods. When the -t flag is set, time mismatches are ignored.

mergextr
USAGE: mergextr [-arguments (default)]
   -a[date to start YYMMDDHH]
   -b[date to stop YYMMDDHH]
   -i[input file name of file names (mergelist.txt)]
   -o[Output file name: (xtrct.txt)]

This program will read a file of filenames (format: DG_YYYYMMDDHH)
and create a new file of filenames with only the dates between and
including the dates entered. The DG_ file name convention is usually
associated with the condecay program.

tcmsum
USAGE: tcmsum -[options(default)]
   -c[column number in source term file (1)]
   -d[output file name suffix (bin)]
   -i[input file name (cdump)]
   -o[output file base name (tcmsum)]
   -h[half-life-days (0.0 = none)]
   -p[pollutant sum label (SUM)]
   -s[source term input file name (cfactors.txt)]
   -t[time decay start: MMDDHH (run_start)]

Program to add together the pollutant concentrations from one HYSPLIT
concentration file, where the pollutants represent different starting times as
configured from a HYSPLIT Transfer Coefficient Matrix (TCM) simulation
with ICHEM=10. The concentrations for each starting time are multiplied
by a source term defined in an auxiliary input file.

GUI: srm_sum.tcl

Ensembles
accudiv
USAGE: accudiv [-options]
Reads DATEM formatted measurement and modeled data and calculates the
accuracy/diversity among all the possible combinations of pairs, trios, etc.
   -b[base model output file name)]
   -m[measurement input file name ]
   -o[output file name ]

Applies a reduction technique to an ensemble. In this technique all the
possible model combinations are tested and the chosen subensemble is the
one that shows the minimum square error. The measurements and model
results should be defined by DATEM format files with an identical number
of records.

GUI: ens_reduc.tcl       


Tutorial: ens_reduc.sh

ensperc
USAGE: ensperc [-options]
   -b[base model output file name)]
   -m[measurement input file name ]
   -o[output file name ]

Reads a DATEM formatted measurement file and model data files and
calculates concentration percentiles for ensemble runs and measured values.
The program requires DATEM formatted model outputs and measurements.
Sequential numbers (.000) are automatically appended to the base name to
generate the ensemble member file name. The cumulative concentration
distribution is computed independently for measured and calculated pairs
when both are non-zero.

enstala
USAGE: enstala [-options]
   -b[base model output file name)]
   -m[measurement input file name ]
   -o[output file name ]

Reads a DATEM formatted measurement file and model data files and
calculates values to generate a Talagrand diagram. Sequential numbers
(.000.txt) are automatically appended to the base name to generate the
ensemble member file name.

var2datem
USAGE: var2datem -[arguments]
   -c[input to output concentration multiplier]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -r[random number generator seed number (-1)]
   -p[percent standard deviation (10%) ]
   -n[minimum concentration (7.0) ]
   -d[write to diagnostic file]
   -h[header info: 2=input text, (1)=file name, 0=skip]
   -m[measurement data file name]
   -s[supplemental information ]
   -o[output concentration file name]

Program to read DATEM file and output randomly generated DATEM file
results given a standard deviation and minimum concentration. It is a way to
evaluate measurement uncertainty when computing model performance
statistics.

Graphics Creation
boxplots
USAGE: boxplots [-arguments]
   -a[ascii output file]
   -c[concentration conversion factor]
   -d[datem formatted measurement file]
   +g[graphics: (0)-Postscript, 1-SVG]
   -l[level index number (1)]
   -m[minimum scale for plot]
   -M[maximum scale for plot]
   -n[number of divisions (10)]
   -p[pollutant index number (1)]
   -s[start time period (1)]
   -t[title string]
   -u[units string for ordinate]
   -x[longitude]
   -y[latitude]
Output: boxplots.{ps|html}

Program to read the probability files output from the program conprob and
create up to 12 (time periods) box plots per page. The probability files
consist of binary concentration fields for each probability level. Box plots
are created for a specific location.

GUI: disp_boxp.tcl       


Tutorial: ens_data.sh

concplot
USAGE: concplot -[options (default)]
   -i[Input file name: (cdump)]
   -o[Output file name: (concplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 38 additional options (see command line for details)

Primary graphical display program for HYSPLIT binary concentration files.
The data are contoured and color-filled against a map background. At a
minimum, only the name of the input file is required. Colors, contour
intervals, map background, and label details may be adjusted through the
approximately 40 command line options.

GUI: conc_rank.tcl       


Tutorial: conc_disp.sh

contour
Usage: contour [-options]
   -d[Input metdata directory name with ending /]
   -f[input metdata file name]
   +g[graphics: (0)-Postscript, 1-SVG]
   -y[Map center latitude]
   -x[Map center longitude]
   -r[Map radius (deg lat)]
   -v[Variable name to contour (e.g. TEMP)]
   -l[Level of variable (sfc=1)]
   -o[Output time offset (hrs)]
   -t[Output time interval (hrs)]
   -c[Color (1/3) or B&W (0/2); 0/1=lines 2/3=nolines]
   -g[Graphics map file (arlmap) or shapefiles.txt]
   -m[Maximum contour value (Auto=-1.0)]
   -i[Increment between contours (Auto=-1.0)]
   -a[Arcview text output]
Output: contour.{ps|html}

Contour fields from a meteorological data file in ARL format

GUI: disp_map.tcl       


Tutorial: traj_flow.sh

ensplots
USAGE: ensplots [-arguments]
   -b[base name for concentration files]
   -c[concentration conversion factor]
   +g[graphics: (0)-Postscript, 1-SVG]
   -m[minimum scale for plot]
   -M[maximum scale for plot]
   -n[number of divisions (10)]
   -x[longitude]
   -y[latitude]
Output: ensplots.{ps|html}

Program to read the probability files output from the program conprob and
create up to 12 (time periods) member plots per page in a format similar to
boxplots. The graphic shows the member number distribution by
concentration for a single location.

GUI: disp_box.tcl       


Tutorial: ens_data.sh

gridplot
USAGE: gridplot -[options(default)]
   -a[scale: 0-linear, (1)-logarithmic]
   -b[Science on a Sphere output: (0)-No, 1-Yes)]
   -c[concentration/deposition multiplier (1.0)]
   -d[delta interval value (1000.0)]
   -f[(0), 1-ascii text output files for mapped values]
   -g[GIS: 0-none 1-GEN(log10) 2-GEN(value) 3-KML 4-partial KML]
   +g[graphics: (0)-Postscript, 1-SVG]
   -h[height of level to display (m) (integer): (0 = dep)]
   -i[input file name (cdump.bin)]
   -j[graphics map background file name: (arlmap)]
   -k[KML options: 0-none 1-KML with no extra overlays]
   -l[lowest interval value (1.0E-36), -1 = use data minimum]
   -m[multi frames one file (0)]
   -n[number of time periods: (0)-all, numb, min:max, -incr]
   -o[output name (plot.ps)]
   -p[process output file name suffix]
   -r[deposition: (1)-each time, 2-sum]
   -s[species number to display: (1); 0-sum]
   -u[mass units (ie, kg, mg, etc)]
   -x[longitude offset (0), e.g., -90.0: U.S. center; 90.0: China center]
   -y[latitude center (0), e.g., 40.0: U.S. center]
   -z[zoom: (0=no zoom, 99=most zoom)]

Creates a postscript or html file to show concentration field evolution by
using color fill of the concentration grid cells. It is designed especially for
global grids such as Science on a Sphere output, which assumes the
concentration file is a global lat/lon grid.

GUI: conc_grid.tcl       


Tutorial: conc_disp.sh

isochron
USAGE: isochron -[options(default)]
   -d[time interval in hours, (-1)=automatic selection]
   -f[Frames: (0)-all frames one file, 1-one frame per file]
   -g[GIS: 0-none 1-GEN(log10) 2-GEN(value) 3-KML 4-partial KML]
   +g[graphics: (0)-Postscript, 1-SVG]
   -h[level index (1); 0-all]
   -i[input file name (cdump.bin)]
   -j[Graphics map background file name: (arlmap)]
   -k[KML options: 0-none 1-KML with no extra overlays]
   -m[concentration multiplier (1.0)]
   -n[number of contours (12)]
   -o[output name (toa.ps or toa.xxx where xxx is defined by -p)]
   -p[Process output file name suffix]
   -s[species number to display: (1); 0-all]
   -S[species output table created: (0)-no, 1-yes]
   -t[lowest threshold value (1.0E-36)]
   -u[mass units (ie, kg, mg, etc)]
   -x[longitude offset (0), e.g., -90.0: U.S. center; 90.0: China center]
   -y[latitude center (0), e.g., 40.0: U.S. center]
   -z[zoom: (0=no zoom, 99=most zoom)]

Creates a postscript or html file to show the time it takes for a concentration
grid cell to become non-zero after the start of the simulation. Times are
designated by using color fill of the concentration grid cells.

GUI: conc_time.tcl       


Tutorial: conc_disp.sh

parhplot
USAGE: parhplot -[options(default)]
   -a[GIS output: (0)-none, 1-GENERATE, 3-kml]
   +g[graphics: (0)-Postscript, 1-SVG]
   -i[input file name (PARDUMP)]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1)-first species, 0-sum all, or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parhplot.ps)]
   -j[Map background file: (arlmap) or shapefiles.txt]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the horizontal mass distribution from a PARDUMP file.

GUI: disp_part.tcl       


Tutorial: disp_part.sh

parsplot
USAGE: parsplot -[options(default)]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -j[map background file or {none} to turn off]
   -n[plot every Nth particle (1)]
   -o[output file base name (otherwise PYYMMDDHH.{ps|html})]
   -p[particle max value for color scaling]
   -s[size scaling fraction from default (1.0)]
   -t[particle age plot option not available]

Global particle plot using a cylindrical equidistant projection. If the output
file name is defined on the command line, multiple time periods will be
written to the same file; otherwise output will be one file per time period!

GUI: disp_part.tcl

parvplot
USAGE: parvplot -[options(default)]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1) or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parvplot.ps)]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the vertical mass cross-section from a PARDUMP file.

GUI: disp_part.tcl       


Tutorial: disp_part.sh
parxplot
USAGE: parxplot -[options(default)]
   -a[GIS output: (0)-none, 1-GENERATE, 3-kml]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1)-first species, 0-sum all, or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parxplot.ps)]
   -j[Map background file: (arlmap) or shapefiles.<(txt)|process suffix>]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -x[Force cross section: lat1,lon1,lat2,lon2]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the vertical cross-section through the plume center from a PARDUMP
file.

GUI: disp_part.tcl       


Tutorial: disp_part.sh

poleplot
Usage: poleplot [-options]
   -b[background map file name (arlmap)]
   -c[concentration data file name (cdump)]
   -g[graphics (0)=fill, 1=lines, 2=both]
   +g[graphics: (0)-Postscript, 1-SVG]
   -l[lat/lon label interval in degrees (0.5)]
   -m[maximum concentration value set]
   -o[output file name (poleplot.ps)]
   -v[vector output (0)=no 1=yes]

Program plots HYSPLIT concentrations that were created on a polar grid
(cpack=3) rather than the default cartesian grid. Polar grids are defined in
terms of distance and angle, where sector 1 starts from the north and sector 2 is
clockwise adjacent. Sectors are defined by arc-degrees and distance in
increments of kilometers.

GUI: pole_disp.tcl       


Tutorial: conc_pole.sh

scatter
USAGE: scatter -[options]
   -i{input file}
   +g{graphics: (0)=postscript 1=svg}
   -o{output file (scatter.{ps|html})}
   -p{plot minimum (-999=auto)}
   -s{symbol plot: (0)=station 1=record 2=plus 3=day 4=hour}
   -x{plot maximum (-999=auto)}

Plots a scatter diagram from the file dataA.txt of merged measured and
calculated values created by the statmain program. The default option plots
the sampling station ID number at the data point.

GUI: datem.tcl       


Tutorial: conc_stat.sh

showgrid
Usage: showgrid [-filename options]
   -D[input metdata directory name with ending /]
   -F[input metdata file name]
   +G[graphics: (0)-Postscript 1-SVG]
   -P[process ID number for output file]
   -I[grid point plotting increment (0=none)]
   -L[lat/lon label interval in degrees]
   -A[location of arlmap or shapefiles.<(txt)|processid>]
   -X[Read from standard input]
   -Q[subgrid lower left latitude (xxx.xx)]
   -R[subgrid lower left longitude (xxxx.xx)]
   -S[subgrid upper right latitude (xxx.xx)]
   -T[subgrid upper right longitude (xxxx.xx)]
   -B[plot symbol at each lat/lon in file SYMBPLT.CFG]
Output: showgrid.{ps|html}

Program will create a map of the full-domain of the meteorological data file
with + marks at each grid intersection matching the specified interval (-I). A
smaller map domain may be chosen. If desired a four-character symbol may
be plotted at the latitude-longitude points specified in the input file (-B)
which is in free-format (latitude longitude 4-characters).

GUI: disp_grid.tcl

stabplot
USAGE: stabplot -i -n -y
   -i[process ID number]
   +g[graphics: (0)-Postscript, 1-SVG]
   -n[sequential station number (1+2+...+n)]
   -y[auto y axis log scaling]
Input: STABILITY.{processID}.txt
Output: stabplot.{processID}.{ps|html}

Plots a time series of Pasquill-Gifford stability categories at one or more
locations. Requires the output file STABILITY.TXT from the vmixing
program.

timeplot
USAGE: timeplot [-arguments]
   -i[input concentration file name]
   -s[secondary file name for display]
   -e[process ID number appended to output file]
   +g[graphics type: (0)=Postscript, 1=SVG]
   -c[concentration multiplier (1.0)]
   -m[minimum ordinate value (0.0)]
   -n[sequential station number (1+2+...+n)]
   -p[0=draw only points, 1=no connecting line, 2=both]
   -x[maximum ordinate value (data)]
   -y[y axis log scaling (default=linear)]
   -z[three char time label (UTC)]
Output: timeplot.{ps|html}

Program to plot a time series of concentrations at one or more locations
by reading the output file from program con2stn. Supplemental (secondary)
data in DATEM format may also be included on the plot.

GUI: conc_stn.tcl       


Tutorial: conc_util.sh

trajplot
USAGE: trajplot -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -o[Output file name: (trajplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 14 additional options (see command line for details)

Basic trajectory plotting program for HYSPLIT formatted ASCII endpoint
files.

GUI: traj_disp.tcl       


Tutorial: traj_basic.sh

volcplot
USAGE: volcplot -[options (default)]
   -i[Input concentration file name: (cdump)]
   -o[Output file name: (volcplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 20 additional options (see command line for details)

Uses the binary gridded concentration output file from HYSPLIT to
generate a visual ash cloud against a map background using the VAFTAD
display format.

Graphical Utilities
cat2svg
USAGE: cat2svg -[options (default)]
   -i[Input concatenated file (catsvg.html)]
   -o[Output file name: (svg.html)]
   -t[Truncate blanks at end of output text: 0-no (1)-yes]

When html files containing SVG graphics are concatenated into a single
file, the resulting file will have multiple <head> and <body> tags. This
program normalizes it by removing spurious <head> tags and merging
<body> tags.

catps2ps
USAGE: catps2ps -[options (default)]
   -i[Input concatenated file (catps.ps)]
   -o[Output file name: (postscript.ps)]
   -t[Truncate blanks at end of output text: 0-no (1)-yes]
   -m[Mac version delete save and restore]

Removes extraneous header information and updates the page count for
multiple frame Postscript files that have been created by appending
individual single-frame files.

GUI: hysplit.tcl

coversheet
Usage: coversheet [-options]
   -b[backtracking (0)=no, 1=yes]
   -i[input text file (RSMC.TXT)]
   -j[input date stamp file (COVER.TXT)]
   -o[output file (cover.ps)]

Creates a graphical coversheet from a text file that summarizes various
HYSPLIT run parameters. In a subsequent step, the simulation graphics are
appended to the coversheet. This particular version has been customized for
RSMC applications.

gelabel
Usage: gelabel [-options]
   -p[Process file name suffix (ps|html) or process ID]
   -4[Plot below threshold minimum for chemical output: (0)-no, 1-yes]
   +g[graphics: (0)-Postscript, 1-SVG]
Input: label input file is GELABEL_{ps|html}.{ps|html}

Creates a small table-like graphic to associate contour colors with
concentration values. Results can be merged into Google Earth KML/KMZ
files.

GUI: conc_disp.tcl and conc_grid.tcl

Tutorial: disp_ge.sh

gen2xml
USAGE: gen2xml -[options (default)]
   -i[Input generate filename: (hysgen.txt)]
   -a[Input generate attributes filename: (hysgen.att)]
   -o[Output file name: (hysgen.xml)]

Converts ESRI generate format GIS files to XML files for use in Google
Maps.

splitsvg
USAGE: splitsvg -[options (default)]
   -i[Input file (svg.html)]
   -o[Output file name: (split.svg)]
   -f[extract the First frame only: (0)-no 1-yes]

Creates one SVG file per <svg> tag from an HTML file containing SVG
graphics. Note that the name of each output file is preceded by the frame
number.

Tutorial: conc_disp.sh

stn2ge
USAGE: stn2ge [-options]
   -i[input file name output from con2stn]
   -s[selection station list file name]
   -o[output google earth file name (less .kml)]

Converts the concentration output from the con2stn program for the stations
in the list file to Google Earth (kml) format. The station list file contains
one or more records of station [ID latitude longitude].

GUI: conc_stn.tcl

HYSPLIT Configuration
dat2cntl
USAGE: dat2cntl [options]
   -c[numerator units conversion factor (1.0)]
   -d[emission rate set to the ...
     (0) = measured data value,
     1 = 1/measured data value,
     2 = value in the conversion factor field (-c)]
   -n[number variables on location line (3),4,5]
   -i[input measured data file name in datem format]
   -s[output station number as pollutant ID: no=0, yes=(1)]
   -t[index for trajectory start: 1,2,3,4,5,6,(7)]
     1 = ending of sample
     2 = middle of sample; 3 = end and middle
     4 = start of sample; 5 = end and start
    (7)= end, middle, start; 6 = middle and start
   -z[zero value (1.0E-12)]

Converts a DATEM format measured data file into CONTROL.{variation}
by reading the base CONTROL file and the DATEM observational data
file. The CONTROL file should represent a forward calculation from the
source location and time to encompass the sampling data that is expected to
be impacted by the release. The reconfigured CONTROL files are
numbered sequentially and correspond to the suffix name of the binary
output file. The station ID is written in the pollutant ID field for single
pollutant concentration simulations and it is incorporated into the output file
name for trajectory calculations. Trajectory starts are at the beginning,
middle, and end of the sampling period. Concentration runs release particles
over the entire sampling period.

GUI: conc_geol.tcl and traj_geol.tcl

Tutorial: src_geol.sh

dustbdy
USAGE: dustbdy
Input: CONTROL and LANDUSE.ASC
Output: CONTROL

Generates multiple latitude-longitude source points from a CONTROL file
in which the first two latitude-longitude points are considered to be the
corner points and the third point is the delta latitude-longitude increment.
Each source point represents a PM10 dust emission point based upon the
land use category of desert.

GUI: hymenu.tcl       


Tutorial: cust_dust.sh

dustedit
USAGE: dustedit -[options (default)]
   -b[bowen ratio selection value (2.5)]
   -i[input control file name (dustconus_controlXX)]
   -m[meteorology file name (hourly_data.txt)]
   -o[output control file name (dustconus_control.txt)]

Selects locations from /nwprod/fix/dustconus_controlXX that meet certain
selection criteria based upon current meteorological conditions at each
potential dust emission location. There are three steps:

1. Find the maximum sensible heat flux from 1500-2100 UTC
2. Determine if the latent flux >= 5 and sensible flux >= 25 watts/m2
3. Use as an emission point if the Bowen ratio (S/L) >= 2.5

A new CONTROL file is written that is used in the dust script instead of
dustconus_controlXX. The Bowen ratio is used as a surrogate for wet or dry
conditions to turn off dust emissions where it has recently rained. Dust
emissions will restart once it has dried out as indicated by a Bowen ratio
above the threshold. This program is used as a pre-processor in the
operational dust forecast.
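
As a rough illustration, the three selection steps can be expressed in a few
lines of Python; the hourly record layout used here is hypothetical and only
the numbered tests above are taken from the documentation:

def emits_dust(hourly):
    """hourly: list of (utc_hour, sensible_wm2, latent_wm2) for one site."""
    window = [(s, l) for h, s, l in hourly if 15 <= h <= 21]
    if not window:
        return False
    sensible, latent = max(window)         # step 1: max sensible flux 15-21 UTC
    if latent < 5.0 or sensible < 25.0:    # step 2: minimum flux thresholds
        return False
    return sensible / latent >= 2.5        # step 3: Bowen ratio (S/L) test

print(emits_dust([(15, 10.0, 20.0), (18, 80.0, 10.0)]))   # True (dry site)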

fires
Usage: fires
Usage: fires {bluesky directory}
Usage: fires -d{bluesky directory} -g{aggregation grid size}
Defaults: directory=./ grid_size=0.20

Generates multiple lat-lon source points from a CONTROL file in which
the first lat-lon points are considered to be the corner points. Each source
point represents a forest fire smoke emission point based upon satellite
analysis.

File         Purpose
DOMAIN       CONTROL file that specifies the fire domain
CONTROL      CONTROL file modified with each emission location
BLUESKY.TXT  Names of BlueSky output files, lat, lon, area
BLUESKY.CSV  Aggregated fire locations for BlueSky input
FIRES.ARC    Yesterday's emissions on the aggregation grid
FIRES.NEW    Emissions for today plus any decayed archive emissions
FIRES.TXT    NESDIS HMS file of fire locations for today

The program is intended to be run in two modes. If no BlueSky emission
data are available (BLUESKY.TXT does not exist), then a default emission
scenario is assumed for each pixel location in FIRES.TXT. This pass will
create the BlueSky burns input file (BLUESKY.CSV). This file must then
be processed in the BlueSky framework to produce the specified emission
file for that location. On the second pass to this program, the BLUESKY.TXT
file is read to create a modified CONTROL file with emission locations,
rates, and burn areas. The script calling this program should delete all
BLUESKY.??? files prior to the first pass execution.

firew
Usage: firew
Usage: firew {bluesky directory}

Simpler version of fires for web applications.

goes2ems
Usage: goes2ems [start] [duration] [infile1] [infile2]
   [start] - initial HH hour or force YYYYMMDDHH
   [duration] - emission duration in hours
   [infile1] - first day GOES emission file
   [infile2] - second day GOES emission file

The program reads the GOES PM2.5 emissions file and converts it to the
HYSPLIT time-varying EMITIMES file format.

hysplit executables
USAGE: hy{c|t}{m|s}_{xxx} [optional suffix]
   {c|t} where c=concentration and t=trajectory
   {m|s} where m=multi-processor and s=single-processor
   {xxx} where xxx=compilation-variation
     std=standard version
     ens=integrated grid ensemble
     var=random initialization
     gem=global eulerian model
     ier=ozone steady state
     grs=ozone seven equation
     so2=sulfur dioxide wet chemistry
     cb4=carbon bond four chemistry
  [.suffix] - added to all standard model input and output files if found

Various HYSPLIT executables are created using different compilation
options from the same source code: hymodelc.F for air concentrations and
hymodelt.F for trajectories.

A more detailed discussion about the input and output file requirements for
the variations of the primary HYSPLIT executables can be found in the
model tutorial. A detailed description of the additional file requirements for
the various HYSPLIT chemistry versions (ier,grs,so2) can also be found on-
line.

hysptest
USAGE: hysptest [optional suffix]
  [.suffix] - added to all standard model input and output files if found

Test version of HYSPLIT that will evaluate the CONTROL and
SETUP.CFG files for consistency of options by actually running a single-
particle dummy calculation using the meteorological data. This program is
discussed in more detail in the model tutorial.

GUI: test_cfg.tcl       


Tutorial: test_inputs.sh

latlon
USAGE: latlon [-p{process ID suffix}]
Input: CONTROL
Output: CONTROL

Generates multiple latitude-longitude source points from a CONTROL file
in which the first two points are considered to be the corner points and the
third point defines the delta latitude-longitude increment. An input
CONTROL file requires exactly three starting locations:
  1: lower left corner of matrix grid (1,1)
  2: upper right corner of matrix grid (imax,jmax)
  3: location of grid point (2,2) adjacent to (1,1)

GUI: hymenu.tcl       


Tutorial: src_recp.sh

nuctree
Usage: nuctree -n [-options]
   -o[output activity file name {DAUGHTER.TXT}]
   -n[Parent nuclide name (ie, Cs-137, I-131)]
   -p[Process ID number (CONTROL.pid, CONTROL_DAUGHTER.pid)]
   -r[generate random initial seed]

This program will display the daughter nuclides produced by a parent
nuclide along with the half-life and branching fractions of each nuclide. It
will also read a CONTROL file and use the pollutant ID to determine the
daughter products. An additional CONTROL.DAUGHTER file will be
created with the daughter products information.

GUI: conc_daug.tcl

prntbdy
USAGE: prntbdy
   prompted standard input:
   1 - landuse.asc
   2 - rouglen.asc
   3 - terrain.asc
Input: Select Input File Number
Output: standard

Prints the contents of the various boundary files in the bdyfiles directory.

testnuc
Usage: testnuc
Input: none
Output: standard

This is a hardwired program to test subroutine nucdcy in the HYSPLIT
subroutine library which is used to calculate radioactive decay and the
resulting daughter products defined on the same particle. Documentation is
not available nor are there any known applications.

timeplus
Usage: timeplus year(I2) month(I2) day(I2) hour(I2) fhour(I4)

Prints a new date when given a date and forecast hour.
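
The same arithmetic can be reproduced with the Python datetime module;
the 2000s pivot used here for the two-digit year is an assumption of this
sketch, not a documented feature of timeplus:

from datetime import datetime, timedelta

def timeplus(yy, mm, dd, hh, fhour):
    # add the forecast hours to the base date and return a new date string
    t = datetime(2000 + yy, mm, dd, hh) + timedelta(hours=fhour)
    return t.strftime("%y %m %d %H")

print(timeplus(23, 11, 30, 18, 12))        # -> 23 12 01 06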

vmsmerge
USAGE: vmsmerge -[options(default)]
   -i[input base file name (VMSDIST).000]
   -o[output file name (VMSDIST)]

Merges multiple VMSDIST.XXX files into a single file by looping
sequentially 001->999 through the VMSDIST files to the first missing file.
The contents are merged into one file. These files are normally created by
the MPI version.

vmsread
Usage: vmsread [-filename options]
   -i[input file name (VMSDIST)]
Output: vmsdist.txt

Program to read the HYSPLIT vertical mass distribution file on native
levels and output results to a file at the same interval as in the MESSAGE
file.

zcoord
Usage: zcoord
   prompted standard input:
   30.0 -25.0 5.0

Program to show the HYSPLIT vertical coordinates based upon the values
of the three polynomial coefficients: AA BB CC. New values can be
entered after the program stops. There is no input prompt. The values used
in each HYSPLIT simulation can be found in the MESSAGE file line:
Internal grid parameters (nlvl,aa,bb,cc)
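
Assuming the heights follow the quadratic Z(k) = AA*k^2 + BB*k + CC in
the level index k, they can be tabulated directly; the coefficients below are
the example values from the prompted input above:

aa, bb, cc = 30.0, -25.0, 5.0              # the three polynomial coefficients
for k in range(1, 6):
    print(f"level {k}: {aa * k**2 + bb * k + cc:7.1f} m")
# -> 10.0, 75.0, 200.0, 385.0, 630.0 meters for these coefficients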

Meteorological Data to ARL Format


api2arl
Usage: api2arl [-options]
   -h[help information with extended discussion]
   -e[encoding configuration file {name | create arldata.cfg}]
   -d[decoding configuration file {name | create api2arl.cfg}]
   -i[input grib data file name {DATA.GRIB2}]
   -o[output data file name {DATA.ARL}]
   -g[model grid name (4 char) default = {center ID}]
   -s[sigma compute=1 or read=2 or ignore=(0)]
   -t[top pressure (hPa) or number {50} of levels from bottom]
   -a[average vertical velocity no=(0) or yes=numpts radius]
   -z[zero fields (no=0 yes=1)initialization flag {1}]
The program converts model output data in GRIB2 format, using the
ECMWF ecCodes library (formerly the ECMWF grib_api library), to the
ARL packed format required for input to HYSPLIT. The data must be on a
global latitude-longitude or regional Lambert grid defined on pressure
surfaces. The program will only convert one time period in any one input
file. Multiple time period output files can be created by appending each
time period processed using the cat command.
v1 : Pressure level data from the NCEP NOMADS server
v2 : Hybrid sigma coordinate; index increases with Z
v3 : NOAA pressure and ECMWF hybrid ½° data; index decreases with Z
v4 : GSD HRRR data on pressure level surfaces

apidump
Usage: apidump
Input: data.grib2
Output: standard

Decodes a GRIB2 message, providing a simple inventory of the contents.
There are no command line options and the GRIB message must be in a file
named data.grib2.

arw2arl
USAGE-1: arw2arl [netcdf data file name]
USAGE-2: arw2arl -[options (default)]
   -b[beginning time period index (1)]
   -e[ending time period index (9999)]
   -t[time interval in minutes between outputs (60.0)]
   -s[create WRF variable namelist file for instantaneous winds]
   -a[create WRF variable namelist file for average fluxes]
   -k[create WRF variable namelist file for tke fields]
   -c[create and run with namelist file (1)=inst, 2=avrg flux, 3=tke]
   -d[diagnostic output turned on]
   -i[input netcdf data file name (WRFOUT.NC)]
   -o[output ARL packed file name (ARLDATA.BIN)]
   -p[packing configuration file name (ARLDATA.CFG)]
   -v[variable namelist file name (WRFDATA.CFG)]
   -n[number of levels to extract from sfc (50)]
   -z[zero initialization each time 0=no (1)=yes]

Advanced Research WRF to ARL format: converts ARW NetCDF files to a
HYSPLIT compatible format. The WRFDATA.CFG namelist file can be
manually edited to select other variables to output. X variables are not
found in the WRF output but are created by this program. All variables
require a units conversion factor be defined in the namelist file. When the
input file contains data for a single time period, then the ARL format output
file from each execution should be appended (cat >>) to the output file from
the previous time period's execution.

GUI: arch_arw.tcl

content
Usage: content
Input: GRIB1 file name
Output: standard

Dumps the contents, by section, of a GRIB1 file.

drn2arl
USAGE: drn2arl [setup file]
Input: DRONE_SETUP.CFG

Program to convert meteorological drone data for ONE vertical profile to
standard packed form for input into other ARL programs. See the associated
README file for more detailed instructions.

grib2arl
Usage: grib2arl [-options]
   -[14 options, see source code]
This program is an orphaned application as it has been moved to the
~/data2arl/legacy directory and pre-compiled versions are no longer
available.

GUI: arch_{era|ecm}.tcl

narr2arl
Usage: narr2arl [file_1] [file_2] [...]

Converts a North American Regional Reanalysis (NARR) model pressure
GRIB1 file to the ARL packed format. Processes only one time period per
execution. Multiple input files may be specified on the command line, each
containing different variables required for the specific time period. Input
data are assumed to be already on a conformal map projection on model
levels. However, wind vectors need to be rotated from true to the projection.

sfc2arl
Usage: sfc2arl [filein] [fileout] [clat] [clon] [optional process ID number]

Creates a gridded meteorological file for multiple time periods in ARL
packed format from an ASCII input file of surface observations. For
example:

YY MM DD HH MM MSLP SFCP T02M RH2M MIXD WDIR WSPD PASQ SHGT
07 09 04 11 00 1007 0937 0021 0079 552.0 270 5.0 4 00175
07 09 04 12 00 1008 0938 0022 0065 745.0 300 6.0 4 00175

The output data grid is compiled for 25 by 25 grid points at 10 km
resolution and it is centered about the command line lat-lon position. This
program is very similar to stn2arl except the input file contains the
additional variables, SFCP, T02M, RH2M, SHGT, and the vertical
coordinate of the output grid is sigma rather than absolute pressure units.

snd2arl
Usage: snd2arl [filein] [fileout] [clat] [clon] [mixing] [yymmddhh] ...
    [optional process ID number]

Creates a one-time-period ARL packed format meteorological file from a
single rawinsonde over a 50 x 50 (10 km) grid, where clat-clon is the
grid center and mixing is an integer from 1 to 7 defining the turbulent
mixing from convective (1), neutral (4), to very stable (7). The input file
should be free format with the following information (without the label
record):

Pressure Temp DewPt Height Direction Speed
1007.00 27.40 18.40 93.00 150.00 4.63
1000.00 26.60 17.60 146.00 151.60 4.55
925.00 20.20 16.10 828.00 169.54 3.63
891.00 17.40 15.30 1150.44 178.16 3.18
883.89 17.20 15.25 1219.00 180.00 3.09

stn2arl
Usage: stn2arl [filein] [fileout] [clat] [clon] [optional process ID number]

Creates a gridded meteorological file for multiple time periods in ARL
packed format from an ASCII input file of surface observations. For
example:

YY MM DD HH MM WDIR WSPD MIXD PASQ
07 09 04 11 00 270 5.0 1500.0 4
07 09 04 12 00 280 5.0 1500.0 4

The output data grid is compiled for 25 by 25 grid points at 10 km
resolution and it is centered about the command line lat-lon position. This
program is very similar to sfc2arl except the input file contains only the
minimum number of variables.

GUI: arch_user.tcl       


Tutorial: meteo_enter.sh

Meteorological Data Editing


add_data
Usage: add_data [station data file name] [ARL gridded file name] ...
   followed by the optional parameters {none= (default)}:
   -d to turn on diagnostic output
   -g{#}, where # is the grid point scan radius (9999)
   -t{#}, where # is (1) for blending temperature or 0 for no blend
   -z{#}, vertical interpolation: (0)=PBL, 1=all, 2=above PBL

Edits the packed meteo data file based upon selected observations in the
station data input file. The station file contains wind direction and speed at
times, locations, and heights (any order). Observations are matched with the
gridded data interpolated to the same location. Gridded winds are then
adjusted in direction and speed to match the observation. Those winds are
then interpolated (using 1/r^2) back into the gridded data domain in the
mixed layer. The program is not intended to be a replacement for data
assimilation but a quick way to adjust the initial transport direction to match
local observations near the computational starting point. The optional -g#
parameter is used to limit the radius of influence of the 1/r^2 weighting to #
grid points in a radial direction from each of the observation locations. The
default (9999) is to use the whole grid. Vertical interpolation defaults to
mixed layer only, but can be set to do the entire profile, or it can just
process the levels above the mixed layer.

Sample ASCII station observation input file (missing = -1.0):

-------- required ----------------------- ---- optional ---
YY MM DD HH MM LAT LON HGT DIR SPD TEMP Up2 Vp2 Wp2
95 10 16 00 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
95 10 16 02 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
95 10 16 04 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
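
The 1/r^2 spreading step can be sketched as follows; the array names and
the scan-radius handling are illustrative and are not taken from the add_data
source code:

import numpy as np

def spread_adjustments(du, obs, radius=9999):
    """du: 2-D grid of wind adjustments; obs: list of (i, j, delta) points."""
    nx, ny = du.shape
    ii, jj = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    wsum = np.zeros_like(du)
    for i0, j0, delta in obs:
        r2 = ((ii - i0) ** 2 + (jj - j0) ** 2).astype(float)
        w = 1.0 / np.maximum(r2, 1.0e-6)   # 1/r^2 weight, capped at the obs
        w[np.sqrt(r2) > radius] = 0.0      # the -g scan-radius cutoff
        du += w * delta
        wsum += w
    return du / np.maximum(wsum, 1.0e-12)  # weighted mean adjustment

grid = np.zeros((25, 25))
print(spread_adjustments(grid, [(12, 12, 2.0)])[12, 12])   # ~2.0 at the obs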

add_grid
Usage: add_grid
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: addgrid.bin

Program to interpolate an existing meteorological data file to a finer spatial
resolution grid at integer intervals of the original grid spacing. The program
should be run before add_data to ensure that the resolution of the
meteorological data file is similar to that of the observations. This
combination of programs was used to merge WRF and DCNet data for
dispersion model calculations.

add_miss
Usage: add_miss
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: addmiss.bin

Examines a file for whole missing time periods (records missing) and
creates a new output file with interpolated data inserted into the proper
locations. The first two time periods in the file cannot be missing. The
missing data records must already exist in the file to use this program. Files
with DIFF records will not be interpolated correctly.

add_time
Usage: add_time
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - The output frequency in minutes
Output: addtime.bin

Program to interpolate additional time periods into a meteorological data
file. The new data output interval should be an even time multiple of the
existing data file. Options are set through standard input. Linear
interpolation is used to create the data for the output time periods that fall
between the time periods of the input data file. Files with DIFF records will
not be interpolated correctly.

add_velv
Usage: add_velv
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Lower left corner of output grid (lat,lon)
   line 4 - Upper right corner of output grid (lat,lon)
   line 5 - Number of vertical levels including surface
Output: addvelv.bin

Extracts a subgrid from a larger domain file and adds three additional
records for the velocity variances: u^2, v^2, and w^2. These are computed
from the TKE field. The program requires the input meteorological file to
contain the turbulent kinetic energy field. The output file may then be used
as an input file for other programs that will add observational data. The
program can be used to create a meteorological data file with velocity
variances over a smaller domain where observational variance data (i.e.
DCNet) can be assimilated using add_data.

aer2arl
Usage: aer2arl
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Meteorological model top pressure (hPa)
Output: outfile.arl

Reads in an AER WRF formatted ARL file and writes out an ARL file with
the new AWRF header and vertically staggered meteorological variables
repositioned accordingly.

arl2meds
Usage: arl2meds
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: meds.fix

Converts one ARL SHGT (terrain height) data record to MEDS format.
This file may be required by the meds2arl converter if the surface terrain
(ST) variable is not included within the MEDS data file. Note that the data
may need to be realigned with either the prime meridian or dateline to
match the alignment of the MEDS data. The MEDS data alignment is
written to the meds.txt diagnostic message output file by the meds2arl
program. Therefore it may take one or two iterations of arl2meds and
meds2arl to get the proper command line input parameters.

This program is an orphaned application as the primary converter program
meds2arl has been moved to the ~/data2arl/legacy directory.

data_avrg
Usage: data_avrg
   prompted standard input:
   line 1 - Grid point filter (delta: i,j)
   line 2 - /meteorological/data/directory/
   line 3 - ARLformat_datafile_name
Output: average.bin

Averages ARL format meteorological data according to input grid point
filter options, such that each value is replaced by the average value of all
grid points within the rectangular area ±i and ±j. Files with DIFF records
will not be averaged correctly.

data_del
Usage: data_del
   prompted standard input:
   line 1 - Variable to remove (4-char ID)
   line 2 - /meteorological/data/directory/
   line 3 - ARLformat_datafile_name
Output: clean.bin

This program deletes a variable from the ARL formatted meteorological file.

edit_flux
Usage: edit_flux
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - the new roughness length (m)
Output: editflux.bin

Edits all the flux fields based upon a pre-determined roughness length.
Using U = U* k / ln(Z/Zo) with the original Zo and a new equation with a
modified Zo^ that represents the new larger roughness length, take the ratio
of the two equations such that U*^/U* = ln(Z/Zo)/ln(Z/Zo^). For
computational purposes, assume that Z is always one meter greater than
Zo^. Then the momentum flux fields are multiplied by this ratio, while
T* is divided by the ratio.
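
A worked example of this scaling, computed directly from the ratio above
with Z = Zo^ + 1 m (the roughness values are illustrative):

import math

def flux_ratio(zo_old, zo_new):
    z = zo_new + 1.0                       # Z is one meter above the new Zo
    return math.log(z / zo_old) / math.log(z / zo_new)

print(round(flux_ratio(0.05, 0.5), 2))     # -> 3.1: momentum fluxes are
                                           # multiplied, T* divided, by this
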
edit_head
Usage: edit_head
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Incorrect time (YY MM DD HH FH)
   line 4 - Correct time (YY MM DD HH FH)

Edits the 50 byte ASCII header of each data record of a pre-existing
meteorological data file. The program could be recompiled to perform other
edits besides changing incorrect time labels.

edit_index
Usage: edit_index
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name

Edits the extended header (to byte 108) for each index record of an existing
meteorological data file. The program needs to be modified and then
recompiled to customize the edit for each problem encountered.

edit_miss
Usage: edit_miss
   prompted standard input:
   line 1 - maximum number of missing periods permitted
   line 2 - just list missing (0) or write into data file (1)
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_datafile_name
   line 5 - processing start day,hr

Program edit_miss is used to interpolate missing variables in an existing
meteorological data file from adjacent time periods. The missing data must
already exist in the file as valid records with either a blank or missing code
in the field. For files with missing records, use program add_miss. Files
with DIFF records will not be interpolated correctly.

edit_null
Usage: edit_null
   prompted standard input:
   line 1 - Data grid size (nxp, nyp) for all files
   line 2 - Data set name (with NULL field ID)
   line 3 - Output data set name with merged data
   line 4 - Data set name with the one-field to replace NULL

This program replaces one record per time period in a file where the
missing record is identified by NULL. The new field is read from another
file that only contains the one variable. The records are matched by time.
For example, the program can be used to insert precipitation records into a
file. Some editing and recompilation may be required.

file_copy
Usage: file_copy [file1] [file2]

Appends file1 to the end of file2. This program can be used in the event
that the command type file1>>file2 in DOS or cat file1>>file2 in UNIX
cannot be used.

GUI: arch_ecm.tcl

file_merge
Usage: file_merge
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_file#1_name
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_file#2_name

Merges the data records from file #1 into file #2 by replacing the records in
file #2 with the same time stamp as those in file #1. If data times in file #1
go beyond the end of file #2, those records are appended to file #2.

pole2merc
Usage: pole2merc
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_northern_hemisphere_name
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_southern_hemisphere_name
   line 5 - Lower left corner (lat,lon) of Mercator grid
   line 6 - Upper right corner (lat,lon) of Mercator grid
Output: mercator.bin

Merges two meteorology files, a northern hemisphere and a southern
hemisphere polar stereographic projection, to a single Mercator projection
output file. The polar grids need to be identical in grid size.

rec_copy
Usage: rec_copy
   prompted standard input:
   line 1 - /meteorological/FROMdata/directory/
   line 2 - ARLformat_copyFROM_datafile_name
   line 3 - /meteorological/TOdata/directory/
   line 4 - ARLformat_copyTO_datafile_name
   line 5 - Relative to start time record numbers copied (start,stop)
   line 6 - Start copy at (day, hour)

Copies a range of records from file #1 to file #2 starting at the time
specified. Both meteorological grids need to be identical. The records to be
copied are specified by the range of record numbers relative to the record
number of the starting time. Records in the destination file are replaced
regardless of whether they match in terms of content.

rec_insert
Usage: rec_insert
   prompted standard input:
   line 1 - the data grid size (nxp, nyp)
   line 2 - the OLD data set name
   line 3 - the NEW data set name
   line 4 - the special MERGE data set name

The program is designed to be customized and recompiled for each
application. The current version copies the records from the OLD
meteorological data file into the NEW file. During the copy it tests for the
NULL variable ID in the OLD meteorological data file and replaces that
record with the equivalent record number in the special MERGE data file.

rec_merge
Usage: rec_merge
   prompted standard input:
   line 1 - the data definition file (METDATA.CFG)
   line 2 - the OLD data set name
   line 3 - the NEW data set name
   line 4 - the special MERGE data set name

The program merges in an additional data file (MERGE) with one record
per time period, reading an OLD archive format data file (one without the
index record) to the NEW style format as defined by the data definition file.
Information about the original purpose of this program is no longer
available.

xtrct_grid
Usage: xtrct_grid [optional arguments] and prompted input
   -h [shows this help display]
   -p [add process ID to the file names]
   -g [use grid points to define the subgrid]
   -n [number of time periods to average (0)]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Lower left corner (lat,lon or i,j if -g set)
   line 4 - Upper right corner (lat,lon or i,j if -g set)
   line 5 - Number of data levels (including sfc)
Output: extract.bin

Extracts a subgrid from a larger domain meteorological file in ARL format.
The subgrid is selected through the lower left and upper right corners either
by latitude-longitude or grid point value if the -g command line option is
invoked. Note that subgrids cannot be smaller than the minimum record
length required to write an INDX record. The length of the index record is
determined by the number of variables and levels.

xtrct_time
Usage: xtrct_time -p[process_id] -s[skip DDHH] -d[delete] -h[help]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - starting day, hour, min for extract
   line 4 - ending day, hour, min for the extract
   line 5 - skip time period interval (0=none 1=every other)
   line 6 - start and stop record numbers (over-ride or accept default)
Output: extract.bin

Program to extract a selected number of time periods from an ARL format
meteorological data file. The (-d) delete option can be used to change the
program to delete rather than extract the selected time period. The skip (-s)
option only skips duplicate time periods matching the day hour (DDHH)
specified.

Meteorological Data Examination
arl2grad
Usage: arl2grad {/meteo_dir_name/} {arl_filename} {output file name}
If the output filename is missing, then it is set to MODEL_ID.grd

Converts ARL packed meteorological data to GrADS format.

chk_data
Usage: chk_data
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

A simple program to dump the first and last four elements of the data array
for each record (i.e. variable) of an ARL packed meteorological file. It is
used primarily for diagnostic testing.

chk_file
Usage: chk_file [-i{nteractive} -s{hort} -f{file} -t{ime}]
   -i{nteractive} prompted standard input:
     line 1 - /meteorological/data/directory/
     line 2 - ARLformat_datafile_name
     line + - Enter to continue ...
   -s{hort} only index record information output
   -f{file} output to CHKFILE{1|2|3} otherwise to standard output
   -t{ime} only writes the first and last record of each time period

Program to dump the 50 byte ASCII header on each record in addition to a
summary of the information contained in the index record.

chk_index
Usage: chk_index
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line + - Enter to continue when changes noted
Output: standard

Checks the extended header (bytes 1:108) for each meteorological index
record to ensure that they are consistent for all time periods in the data file.
There is one index record before all of the data records each time period.

chk_rec
Usage: chk_rec
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

Elementary program to dump the 50 byte ASCII header for each record.

chk_times
Usage: chk_times
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

Program to list all the time periods in a file and check that the number of
records per time period is consistent. The maximum forecast hour is also
returned.

data_year
Usage: data_year
Input: none (all meteorological files to be opened are hardcoded)
Output: AT{latitude}{longitude}

The program will create annual averages of a variable (temperature) at
several pre-designated latitude-longitude positions using the 2.5° global
reanalysis data. Directory, file names, and all other options are hardcoded.

datecol
Usage: datecol {meteorology file#1} {meteorology file#2} {...#n}
Output: standard

Prints the dates from an ARL format meteorology file to standard output as
one line each for years, months, days, and hours. This program is used in
several web applications.

datesmry
Usage: datesmry {output from program datecol}
Output: standard

Tabulates a set of dates from program datecol into bins for web use.

filedates
Usage: filedates [input file]

Lists the [YY MM DD HH FF] as a series of records to standard output
showing all of the time periods in an ARL formatted meteorological data
file. Used in several web applications.

findgrib
Usage: findgrib
   prompted standard input:
   line 1 - /GRIB1/directory/file_name
Output: standard

Finds the starting GRIB string at the beginning of each GRIB1 record and
determines the record length between GRIB1 records from the byte count
and the actual record length encoded in the binary data.

gridxy2ll
Usage: gridxy2ll [-filename options (in CAPS only)]
   -D[input metdata directory name with ending /]
   -F[input metdata file name]
   -P[process ID number for output file]
   -X[x point]
   -Y[y point]
   -W[image width]
   -H[image height]
Returns the latitude-longitude position of the upper-right corner point in
meteorological grid units given the lower-left position and the delta-width
and -height in grid units.

inventory
Usage: inventory
Input: GRIB1 file name
Output: standard

Produces an inventory listing of all the records in a GRIB1 file.

metdates
Usage: metdates [input file]

Prints the dates from an ARL formatted meteorology file. The program is
similar in function to filedates and one or both of these are used for web
applications.

metlatlon
Usage: metlatlon [directory filename]

Prints a text listing of the latitude-longitude associated with each grid point
in an ARL formatted meteorological data file.

metpoint
Usage: metpoint [directory filename latitude longitude]

Determines if a location is within the domain of an ARL formatted
meteorological data file. The program command line contains latitude,
longitude, file name, and directory, and returns the i,j of the position.
Negative values are outside the grid.

profile
Usage: profile [-options]
   -d[Input metdata directory name with ending /]
   -f[input metdata file name]
   -y[Latitude]
   -x[Longitude]
   -o[Output time offset (hrs)]
   -t[Output time interval (hrs)]
   -n[Hours after start time to stop output (hrs)]
   -p[process ID number for output text file]
   -e[extra digit in output values (0)-no,1-yes]

Extracts the meteorological profile at the selected location with the values
always written to a file called profile.txt, which may be appended by a
process ID. Without the -t option set, only the first time period will be
extracted. Use -o to start at a time after the first time period.

GUI: disp_prof.tcl       


Tutorial: meteo_prof.sh

unpacker
Usage: unpacker
Input prompted on the command line: Line 1 - GRIB1 file name
Output: standard

Decodes each section and all variables in a GRIB1 meteorological file.

velvar
Usage: velvar
Input: CONTROL
Output: velvar.txt

Creates a time series of velocity variance and diagnostic values using the
meteorological data and location specified in a standard HYSPLIT
CONTROL file.

vmixing
USAGE: vmixing (optional arguments)
   -p[process ID]
   -s[KBLS - stability method (1=default)]
   -t[KBLT - PBL mixing scheme (2=default)]
   -d[KMIXD - Mixing height scheme (0=default)]
   -l[KMIX0 - Min Mixing height (50=default)]
   -a[CAMEO variables (0[default]=No, 1=Yes, 2=Yes + Wind Direction)]
   -m[TKEMIN - minimum TKE limit for KBLT=3 (0.001=default)]
   -w[an extra file for turbulent velocity variance (0[default]=No,1=Yes)]
Output: STABILITY{processID}.TXT

Creates a time series of meteorological stability parameters.

xtrct_stn
Usage: xtrct_stn [-i{nteractive} -f{ile of lat-lons} -r{rotate winds to true}]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Number of variables to extract, for each variable ...
   line 4 -     CHAR_ID(a4) level#(i2) Units_multiplier(f10)
   line 5 - Position of data extraction (lat,lon) when NOT -f
   line 6 - Interpolation as Nearest Neighbor (0) or Linear (1)
   line 7 - Output file name
   line 8 - Initial output record number (to append to an existing file)
Output: ASCII; records grouped by time period, then station, with all
variables on one line

Creates a time series of meteorological variables interpolated to a specific
latitude-longitude point. Multiple variables may be defined, each by their 4-
character ID. To enhance the interactive mode, prompting for each input
line can be turned on (-i). If the file of latitude-longitude points is defined (-
f) then the position of the extraction point is not required. The first record of
the lat-lon file is the number of lines to follow.

Particle Utilities
asc2par
USAGE: asc2par -[options(default)]
   -i[input file name (PARDUMP.txt)]
   -o[output file name (PARDUMP.bin)]

Converts the ASCII formatted particle dump file, created by the program
par2asc, to a big-endian binary file. Each output time is composed of a
header record indicating the number of particles to follow for that time
period:

Header Record: NUMPAR,NUMPOL,IYR,IMO,IDA,IHR
   Each particle output is composed of three data records:
   Record 1: MASS(1:NUMPOL)
   Record 2: TLAT,TLON,ZLVL,SIGH,SIGW,SIGV
   Record 3: PAGE,HDWP,PTYP,PGRD,NSORT
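
A minimal Python sketch of reading the corresponding binary header record
follows; it assumes Fortran sequential records framed by 4-byte big-endian
length markers, which is an assumption about the file layout rather than
something documented here:

import struct

def read_record(f):
    head = f.read(4)                       # leading record-length marker
    if not head:
        return None                        # end of file
    (nbytes,) = struct.unpack(">i", head)
    payload = f.read(nbytes)
    f.read(4)                              # trailing record-length marker
    return payload

with open("PARDUMP.bin", "rb") as f:
    numpar, numpol, iyr, imo, ida, ihr = struct.unpack(">6i", read_record(f))
    print(numpar, "particles at", iyr, imo, ida, ihr)
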
par2asc
USAGE: par2asc -[options(default)]
   -i[input file name (PARDUMP)]
   -o[output file name (PARDUMP.txt)]
   -v[inventory date/times (PARINV)]
   -a[optional one-line-per-particle output that
     can be imported into GIS:(0)-none ; 1-PAR_GIS.txt created]

Converts the particle dump output file (big-endian binary) to an ASCII-
formatted particle file, and as an option, outputs an inventory file listing the
date/time(s) of the output. Each output time of the ASCII-formatted particle
file is composed of a header record indicating the number of particles to
follow for that time period. With the optional -a1 text file option,
PAR_GIS.txt is created, where each particle has one line, and the following
is written: header record + one record for each particle.

par2conc
USAGE: par2conc -[options(default)]
   -i[input file name (PARDUMP)]

Converts the particle dump output file (big-endian binary) to a HYSPLIT-
formatted concentration file using a CONTROL file to determine the grid
and averaging times.

parmerge
USAGE: parmerge -[options(default)]
   -i[input base file name (PARDUMP).000]
   -o[output file name (PARDUMP)]

Merges multiple PARDUMP.XXX files into a single file. These files are
normally created by the MPI version. The program loops sequentially 001
through 999 for any existing PARDUMP files and merges the contents into
one file. The program stops at the first missing file.

paro2n
USAGE: paro2n -[options(default)]
   -i[input file name (PARDUMP)]
   -o[output file name (PARDUMP.NEW)]
Converts the {o}ld particle dump output file (big-endian binary) to the
{n}ew format that includes the minutes field.

parshift
USAGE: parshift -[options (default)]
   -b[blend shift outside of the window]
   -d[delete particles instead of shift/rotate (0)-all #-specie]
   -i[input base file name (PARDUMP)]
   -o[output base file name (PARINIT)]
   -r[rotation degrees:kilometers:latitude:longitude]
   -s[search for multiple files with .000 suffix]
   -t[time MMDDHHMN (missing process first time only)]
   -w[window corner lat1:lon1:lat2:lon2 (-90.0:-180.0:90.0:180.0)]
   -x[shift delta longitude (0.0)]
   -y[shift delta latitude (0.0)]

The program provides for the spatial adjustment of the particle positions in
the HYSPLIT binary particle position output file as specified on the
command line. One or more files may be processed by a single execution
with the adjusted output always written to a new file name. The position
adjustment is only applied to one time period if the input file contains
multiple time periods. The shift is specified in delta latitude-longitude
units. Unless a latitude-longitude window is specified, all particle positions
in the file are adjusted. Adjustments outside of the window may be linearly
blended to zero adjustment at a distance of two windows. Adjustments may
also be specified as a rotation and distance from a point. Particles in a
window may also be deleted.

GUI: par_shift.tcl       


Tutorial: cust_toms.sh

stn2par
USAGE: stn2par -[options(default)]
   -i[input file name (meas-t1.txt)]
   -n[number of grid pts (50)]
   -g[grid interpolation (0.5) deg (0=stn)]
   -o[output file name (PARINIT)]
   -p[particle=(0) or puff=1 distribution]
   -r[turns on random initial seed number]
   -s[split factor (10) per station]
   -t[temporal output interval (24) hrs]
   -z[height distribution (3000) meters]

Converts measured data in DATEM format to a multi-time period
PARDUMP file which can be used for HYSPLIT initialization or display
applications.

Shapefile Manipulation
ascii2shp
Usage: ascii2shp [options] outfile type < infile
   reads stdin and creates outfile.shp, outfile.shx and outfile.dbf
   type must be one of these: points lines polygons
   infile must be in 'generate' format
   Options:
      -i Place integer value id in .dbf file (default)
      -d Place double precision id in .dbf file

Converts trajectory endpoints and concentration contours in GENERATE
format to ESRI formatted shape files.

ascii2shp version 0.3.0
Copyright (C) 1999 by Jan-Oliver Wagner
The GNU GENERAL PUBLIC LICENSE applies. Absolutely No Warranty!
ARL ascii2shp version 1.1.10

GUI: asc2shp.tcl       


Tutorial: disp_shp.sh

dbf2txt
USAGE: dbf2txt [-d delimiter] [-v] dbf-file

Extracts the text information from a database file (.dbf).

txt2dbf
Usage: txt2dbf [{-Cn | -In | -Rn.d}] [-d delimiter] [-v] txt-file dbf-file

Converts tab-delimited ASCII tables to the dBASE III format. The table
structure is defined on the command line:
  -Cn [text, n characters]
  -In [integer, with n digits]
  -Rn.d [real, with n digits (including '.') and d decimals]

GUI: asc2shp.tcl       


Tutorial: disp_shp.sh

Trajectory Analysis
clusend
USAGE: clusend - [ options (default)]
   -a[max # of clusters: #, (10)]
   -i[input file (DELPCT)]
   -n[min # of clusters: #, (3)]
   -o[output file (CLUSEND)]
   -t[min # of trajectories: #, (30)]
   -p[min % change in TSV difference from one step to next: %, (30)]

Scans the DELPCT output file from the cluster program, locating the first
"break" in the data. "Break" is defined as an ICRIT% increase in TSV.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

cluslist
USAGE: cluslist - [ options (default)]
   -i[input file (CLUSTER)]
   -n[number of clusters: #, (-9-missing)]
   -o[output file (CLUSLIST)]

Lists the trajectories in each cluster for a given number of clusters.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

clusmem
USAGE: clusmem - [ options (default)]
   -i[input file (CLUSLIST)]
   -o[output file (TRAJ.INP.Cxx)]
Input: CLUSLIST
Output: TRAJ.INP.C{i}, where i=cluster_number

Creates a file listing the trajectories in each cluster for input to programs
trajplot or trajmean.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

clusplot
USAGE: clusplot -[options (default)]
   -i[Input files: name (DELPCT)]
   -l[Label with no spaces (none)]
   -o[Output file name: (clusplot.ps)]
   -p[Process file name suffix: (ps) or label]
Input: DELPCT

Plots TSV data from the DELPCT file created by the cluster program. One
can infer the final number(s) of clusters from the plot.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

cluster
Usage: cluster
   prompted standard input:
   line 1 - Hours to do clustering
   line 2 - Endpoint interval to use
   line 3 - Skip trajectory interval
   line 4 - Cluster output directory
   line 5 - Map (0)-Auto 1-Polar 2-Lambert 3-Merc 4-CylEqu
Input: INFILE
Output: TCLUS, TNOCLUS, DELPCT, CLUSTER, CLUSTERno

Program starts with N trajectories (clusters) and ends with one cluster. MC
is the current number of clusters. Program clusend identifies the stop point.
Cluster pairs are chosen based on total spatial variance (TSV - the sum of
the within-group sum of squares). The two clusters to be paired result in the
minimum increase in TSV.
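
One merge step of this pairing criterion can be sketched as follows; the
spatial variance here is simplified to the sum of squared deviations of the
endpoint positions from the cluster-mean trajectory:

import numpy as np

def sv(c):                                 # within-cluster sum of squares
    return ((c - c.mean(axis=0)) ** 2).sum()   # c: (ntraj, npts, 2) lat-lon

def best_pair(clusters):
    best, pair = None, None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            merged = np.concatenate([clusters[i], clusters[j]])
            gain = sv(merged) - sv(clusters[i]) - sv(clusters[j])
            if best is None or gain < best:    # minimum increase in TSV
                best, pair = gain, (i, j)
    return pair

rng = np.random.default_rng(0)
clus = [rng.normal(c, 0.1, (5, 12, 2)) for c in (0.0, 0.1, 5.0)]
print(best_pair(clus))                     # -> (0, 1), the two nearby clusters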

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

merglist
USAGE: merglist -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -o[Output file base name: (mdump)]
   -p[Process file name suffix: (tdump) or process ID]

Program to merge HYSPLIT trajectory endpoint (tdump) files where the
list of input files contains one trajectory per file which are then merged into
a single output file. Although this program is usually associated with
clustering, it can be used in any trajectory application.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

trajfind
Usage: trajfind [in_file] [out_file] [lat] [lon]

Processes multiple trajectories resulting from the time-height split option
contained in [in_file] to extract a single trajectory to [out_file] that passes
nearest to the selected latitude-longitude given on the command line. This
program is primarily designed to determine optimal balloon trajectories
showing a time sequence of balloon heights required to reach the final
position.

trajfreq
USAGE: trajfreq -[options (default)]
   -f[frequency file name (tfreq.bin)]
   -g[grid size in degrees (1.0)]
   -i[input file of file names (INFILE)]
   -h[number of hours to include in analysis (9999)]
   -r[residence time (0),1,2,3]:
     (0) = no
     1 = yes; divide endpoint counts by number of trajectories
     2 = yes; divide endpoint counts by number of endpoints
     3 = yes; divide endpoint counts by max count number for any grid cell
   -c[include only files with same length as first endpoint file]
     (0) = no
     1 = yes
   -s[select bottom:top (0:99999) m AGL]
   -b[YYMMDDHHNNFF - force begin date label (first file)]
   -e[YYMMDDHHNNFF - force end date label (last file)]
   -a[ascii2shp shapefile input file(0) or 1]
   -k[min longitude for ascii2shp shapefile input file]
   -l[max longitude for ascii2shp shapefile input file]
   -m[min latitude for ascii2shp shapefile input file]
   -n[max latitude for ascii2shp shapefile input file]

Converts multiple trajectory input files into a concentration file that
represents trajectory frequencies.
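
The endpoint gridding can be sketched as below; the binning and the -r
normalization choices are illustrative and are not taken from the trajfreq
source code:

def traj_freq(trajs, grid=1.0, residence=0):
    counts, npts = {}, 0
    for traj in trajs:                     # traj: list of (lat, lon) endpoints
        for lat, lon in traj:
            cell = (int(lat // grid), int(lon // grid))
            counts[cell] = counts.get(cell, 0) + 1
            npts += 1
    if residence == 1:                     # per number of trajectories
        return {c: n / len(trajs) for c, n in counts.items()}
    if residence == 2:                     # per number of endpoints
        return {c: n / npts for c, n in counts.items()}
    if residence == 3:                     # per maximum cell count
        m = max(counts.values())
        return {c: n / m for c, n in counts.items()}
    return counts

t1 = [(40.2, -80.1), (40.7, -80.3)]
t2 = [(40.4, -80.2), (41.5, -79.9)]
print(traj_freq([t1, t2], residence=3))    # the densest cell scales to 1.0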

GUI: traj_freq.tcl      
Tutorial: traj_freq.sh

trajfrmt
Usage: trajfrmt [file1] [file2]

Reads a trajectory endpoints file1, reformats, and writes file2. The program
is designed to be a template for editing trajectory files. The code should be
customized and recompiled for the intended problem. The current data
record format follows: (8I6,F8.1,2F9.3,11(1X,F8.1))

trajgrad
Usage: trajgrad [file name]
Output: grads.bin

Trajectory model output data in ASCII format is converted to GrADS
format.

trajmean
USAGE: trajmean -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -m[Map projection: (0)-Auto 1-Polar 2-Lamb 3-Merc 4-CylEqu]
   -o[Output file name: (tmean)]
   -p[Process file name suffix: (ps) or process ID]
   -v[Vertical: 0-pressure (1)-agl]

Calculates the mean trajectory given a set of trajectories; the input file
options follow trajplot; the output is the standard tdump-format file.

GUI: trajclus_run.tcl      
Tutorial: traj_clus.sh

trajmerg
Usage: trajmerg [file1] [file2] [file3]

Merges trajectories in file1 and file2 into file3.


Advanced / Special Topics / Verification Statistics
See the DATEM (Data Archive of Tracer Experiments and Meteorology) web page for more detailed information on
each of these tracer experiments. The model verification statistics are presented for the un-averaged data, where each
measured and calculated concentration is paired in space and time. Zero-zero pairs are always excluded. Results are also
presented after temporally averaging the measured and calculated values over the duration of each experiment. These
latter results are paired in space but not time and hence represent the calculation's spatial performance. Statistics
presented are the correlation coefficient (R), the fractional bias (FB), the figure-of-merit in space (FMS), the KS
parameter representing the departure of the measured and calculated cumulative distributions from each other, and the
normalized rank (0 to 4) which is computed from the normalized sum of the previous four parameters. The URL
includes links to download the measurement data, meteorological data, statistical programs, and HYSPLIT
configuration files.
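
For reference, a common way to form the normalized rank from the four statistics named above is sketched below,
with R entering as R squared and FMS and KS taken in percent; treat the exact weighting as an assumption here:

def model_rank(r, fb, fms, ks):
    # sum of four normalized components, each contributing 0 to 1
    return r**2 + 1.0 - abs(fb) / 2.0 + fms / 100.0 + 1.0 - ks / 100.0

print(model_rank(1.0, 0.0, 100.0, 0.0))   # a perfect model scores 4.0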

Unless otherwise noted, all HYSPLIT calculations used the NCEP North American Regional Reanalysis (3 hr; 32 km)
meteorological data for the dispersion calculations. The main characteristics of each experimental data set are
summarized as follows:

ACURATE - consisted of measuring the Kr 85 air concentrations from emissions of the Savannah River Plant,
SC. Twelve and 24 hour average air concentrations were collected for 19 months (March 1982 - September 1983)
at 5 locations along the United States east coast from 300 to 1000 km from the plant (Fayetteville, NC to Murray
Hill, NJ).

INEL74 - consisted of a little over two months of Kr-85 releases (February 27th to May 5th, 1974) from Idaho
and continuous 12 h sampling (February 27th to May 4th, 1974) at 11 locations in a line about 1500 km
downwind (Oklahoma City, OK to Minneapolis, MN).

ANATEX - consisted of 66 PerFluorocarbon Tracer (PFT) releases (33 each from two different locations -
January 5th to March 26th, 1987) every two and one half days. Air samples were collected for 3 months (January
5th to March 29th, 1987) over 24 h periods at 75 sites covering most of the eastern US and southeastern Canada.
PTCH (perfluoro-trimethylcyclohexane) was released from Glasgow, Montana (GGW), and PDCH (perfluoro-
dimethylcyclohexane) and PMCH (perfluoro-monomethylcyclohexane) from St. Cloud, Minnesota (STC).

CAPTEX - during September and October of 1983, consisted of six 3 h PFT releases (September 18th to October
29th, 1983), four from Dayton, Ohio and two from Sudbury, Ontario, Canada with samples collected at 84 sites,
300 to 800 km from the source, at 3 h and 6 h averages for about a 48 hour duration after each release (September
18th to October 30th, 1983).

OKC80 - consisted of a single release of two different PFT tracers (July 8th, 1980) over a 3 hour duration with
samples of 3 hour duration collected at 10 sites 100 km and 35 sites 600 km downwind from the Oklahoma City
release point from July 8th to July 11th, 1980.

ETEX - although there were two releases, the primary evaluation data set is from the first release on October 23,
1994 from western France. Three hour duration sampling was conducted in western Europe from October 23
through October 27th.

METREX - consisted of 6 hour emissions of perfluorocarbons simultaneously, from two different locations,
every 36 hours. The tracer release locations were in suburban Washington, D.C., while 8 hour air samples were
collected at three locations within the urban area. The experiment ran for one full year. In addition, monthly air
concentration samples were collected at about 60 locations throughout the region.

SRP76 - consisted of continuous emissions of Krypton-85 from the Savannah River Plant near Aiken, SC. About
15 sampling locations were placed around the plant at varying distances (15 - 150 km) and directions. Weekly
samples were collected between September 1975 and August 1977. Twice-daily samples were collected over four
months (November 1976, February, April, July of 1977).
Verification Statistics Summary

The HYSPLIT model performance by rank is given for the time-mean (spatial correlation) and the individually
paired sample concentrations for the most recent version of HYSPLIT. The rank is computed from the sum of the
correlation, fractional bias, figure-of-merit in space, and the Kolmogorov-Smirnov parameter. In addition, all required
data are provided with the HYSPLIT distribution to run CAPTEX release #2 and compute the resulting performance
statistics through the GUI.

Table of Contents
Concentration / Display / Ensemble / Create Files
Multiple concentration output files from the ensemble or variance dispersion simulation can be processed to produce
probability displays. This file creation step is required to create the files that are needed by the Ensemble / Display /
View Map and Box Plot tabs.

The Create Files menu calls a special program conprob that reads all the existing concentration files with a three-digit
suffix (001 to 999) and generates various probability files. The Create Files menu can be used to convert multiple
concentration output files to probability form, regardless of how they were generated. It is not necessary to use the pre-
configured ensemble version of the model to create the ensemble members. Any number of multiple simulations can be
used as long as the file suffixes end with a numeric three-digit sequence. Menu settings for the output file name should
reflect the base name without the numeric suffix. An illustration of the Ensemble Create Files menu is shown below.

The creation of probability files is limited to a single pollutant species and a single concentration level. If the input
files contain more than one of either, the index numbers of the desired species and level should be entered. The
Aggregation number defaults to one, which means that only the ensemble members for one time period are aggregated
together to produce the probability display; hence there would be an independent probability map for each time period.
However, multiple time periods may be combined to produce a single probability plot. The Aggregation entry
represents the number of time periods, not the actual time span. For instance, if the model output is set to produce a
one-hour average and the model is run for 24 hours, then each ensemble member will have 24 time periods of output.
By default the probability output will represent the ensemble-member variation for each hour, that is, 24 frames of
output will be produced. However, if the aggregation period is set to 24, then all output times will be combined into
one output frame and the ensemble result will represent the hourly variations as well as the member variations.

The last entry is optional; it represents the maximum concentration at which the probability of exceeding certain
concentration values is calculated. Leaving the field equal to zero causes the program to use the order-of-magnitude
value just below the maximum concentration value of any member. If a value is set, then the program computes the
probability of exceeding that maximum, as well as the probabilities at 10% and 1% of that value.
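
For instance (an illustrative calculation only), if the highest concentration in any ensemble member were 3.5E-11,
leaving the field at zero would set the exceedance levels to 1.0E-11, the order-of-magnitude value just below that
maximum, along with 1.0E-12 (10%) and 1.0E-13 (1%); these three values correspond to the
conc_high:conc_mid:conc_low triplet accepted by the -c command-line option listed below.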

The Create Files button runs the conprob extraction program, which always produces all of the probability variations,
one per output file, for the given aggregation period, in the working directory. The conprob output files contain the
following information:

cnumb - Number of members with concentrations greater than zero
cmean - Concentration mean of all members
cvarn - Concentration variance of all members
ccoev - Coefficient of variation (sqrt(variance)*100/mean)
cmax01 - Probability of exceeding 1% of the concentration maximum
cmax10 - Probability of exceeding 10% of the concentration maximum
cmax00 - Probability of exceeding the maximum concentration level
prob05 - Concentrations at the 5th percentile
prob10 - Concentrations at the 10th percentile
prob25 - Concentrations at the 25th percentile
prob50 - Concentrations at the 50th percentile
prob75 - Concentrations at the 75th percentile
prob90 - Concentrations at the 90th percentile
prob95 - Concentrations at the 95th percentile

The command line options for the probability program conprob [-options] are as follows:

-b[base] input file name (cdump)
-c[concentration maximum] (0=auto | conc_high:conc_mid:conc_low)
-d[diagnostics = true]
-p[pollutant] index number for multi-pollutant files (1)
-t[temporal] aggregation period (1)
-v[value] of concentration below which is assumed to equal zero (0.0)
-z[level] index number for multi-level files (1)
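
For example, a minimal command-line sketch (assuming hypothetical ensemble output files named cdump.001 through
cdump.027 in the working directory) would be:

    conprob -bcdump -t24

which aggregates all 24 output periods of each member into a single set of probability files, using the automatic
selection of the concentration maximum.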

Table of Contents
Advanced / Configuration Setup / WRF Vertical Interpolation
WRF Vertical Interpolation Scheme

WVERT sets the vertical interpolation scheme for WRF fields. FALSE = use the HYSPLIT vertical interpolation
scheme. TRUE = use the WRF vertical interpolation scheme. The WRF vertical interpolation scheme mimics the
vertical coordinate transformations within the WRF model. The two schemes differ in how they estimate the change in
height between vertical levels: the HYSPLIT scheme calculates the change in height using the hypsometric equation,
while the WRF scheme uses the eta level, WRF dry mass, and WRF dry inverse density.
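
Although this variable is normally set through the GUI, a minimal sketch of the equivalent SETUP.CFG namelist entry
(assuming the standard HYSPLIT namelist layout) would be:

    &SETUP
     WVERT = .TRUE.,
    /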

Table of Contents
Advanced / Configuration Setup / Output Center-of-Mass Trajectory
Output File Name

CMTFN sets the name for the center-of-mass trajectory output file. The default is blank, in which case the center-of-
mass trajectory will not be written.
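
A minimal sketch of the corresponding SETUP.CFG namelist entry (the output file name cmass.txt is a hypothetical
example) would be:

    &SETUP
     CMTFN = 'cmass.txt',
    /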

Table of Contents
TERRAIN.ASC file

If terrain height is not available in the meteorological file, HYSPLIT estimates it in one of two ways, controlled by the
TERRFLG variable:

TERRFLG=0 (default) - HYSPLIT estimates the terrain height from the surface pressure. This method is also used
when TERRFLG=1 but the TERRAIN.ASC file cannot be found.

TERRFLG=1 - HYSPLIT obtains the terrain height from the TERRAIN.ASC file.

Versions prior to v5.3.0 do not have the TERRFLG variable. In those versions, the TERRAIN.ASC file is used if the
meteorological file has a vertical coordinate system of pressure sigma levels or WRF hybrid coordinates; otherwise the
estimation method is used.

As of hysplit.v5.3.0, notifications and warnings are written to the MESSAGE file when either surface pressure or
terrain height is not present in the meteorological file.
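
Assuming TERRFLG is set through the SETUP.CFG namelist like the other configuration variables in this guide, a
minimal sketch enabling use of the TERRAIN.ASC file would be:

    &SETUP
     TERRFLG = 1,
    /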
