wflow Documentation
Jaap Schellekens
1 Introduction
1.1 Installation
1.2 How to use the models
1.3 Building a model
1.4 Questions and answers
1.5 Available models
1.6 The framework and settings for the framework
1.7 The wflow Delft-FEWS adapter
1.8 Wflow modules and libraries
1.9 BMI: Basic modeling interface
1.10 Using the wflow modelbuilder
1.11 Release notes
1.12 Linking wflow to OpenDA
2 References
4 TODO
Note: There will be no further developments in the Python wflow framework (bugfixes are possible), and the documentation is no longer updated. Developments continue in the Julia package Wflow (https://github.com/Deltares/Wflow.jl), including documentation.
1 Introduction
This document describes the wflow distributed hydrological modelling platform. wflow is part of Deltares' OpenStreams project. Wflow consists of a set of Python programs that can be run from the command line and perform hydrological simulations. The models are based on the PCRaster Python framework (www.pcraster.eu). In wflow this framework is extended (the wf_DynamicFramework class) so that models built using the framework can be controlled using the API. Links to BMI and OpenDA (www.openda.org) have been established. All code is available on GitHub (https://github.com/openstreams/wflow/) and distributed under the GPL version 3.0.
The wflow distributed hydrological model platform currently includes the following models:
• the wflow_sbm model (derived from topog_sbm )
• the wflow_hbv model (a distributed version of the HBV96 model).
• the wflow_gr4 model (a distributed version of the gr4h/d models).
• the wflow_W3RA and wflow_w3 models (implementations and adaptations of the Australian Water Resources
Assessment Landscape model (AWRA-L))
• the wflow_topoflex model (a distributed version of the FLEX-Topo model)
• the wflow_pcrglobwb model (PCR-GLOBWB (PCRaster Global Water Balance, v2.1.0_beta_1))
• the wflow_sphy model (SPHY (Spatial Processes in HYdrology, version 2.1))
• the wflow_stream model (STREAM (Spatial Tools for River Basins and Environment and Analysis of Manage-
ment Options))
• the wflow_routing model (a kinematic wave model that can run on the output of one of the hydrological models
optionally including a floodplain for more realistic simulations in areas that flood).
• the wflow_wave model (a dynamic wave model that can run on the output of the wflow_routing model).
• the wflow_floodmap model (a flood mapping model that can use the output of the wflow_wave model or the wflow_routing model).
• the wflow_sediment model (an experimental erosion and sediment dynamics model that uses the output of the
wflow_sbm model).
• the wflow_lintul model (rice crop growth model LINTUL (Light Interception and Utilization))
The low level api and links to other frameworks allow the models to be linked as part of larger modelling systems:
Figure: wflow models (e.g. WFLOW_HBV, WFLOW_SBM) linked to external systems such as OpenDA.
Note: wflow is part of the Deltares OpenStreams project (http://www.openstreams.nl). The OpenStreams project is a
work in progress. Wflow functions as a toolkit for distributed hydrological models within OpenStreams.
Note: As part of the eartH2Observe project a global dataset of forcing data has been compiled that can also be used with the wflow models. A set of tools is available that can work with wflow (the wflow_dem.map file) to extract data from the server and downscale these for your wflow model. Check https://github.com/earth2observe/downscaling-tools for the tools. A description of the project can be found at http://www.earth2observe.eu and the data server can be accessed via http://wci.earth2observe.eu
The different wflow models share the same structure but differ considerably in their conceptualisation. The shared software framework includes the basic maps (DEM, landuse, soil etc.) and the hydrological routing via the kinematic wave. The Python class framework also exposes the models as an API and is based on PCRaster/Python.
The wflow_sbm model maximises the use of available spatial data. Soil depth, for example, is estimated from the DEM using a topographic wetness index. The model is derived from the CQflow model (Köhler et al., 2006) that has been applied in various countries, most notably in Central America. The wflow_hbv model is derived from the HBV-96 model but does not include the routing functions; instead it uses the same kinematic wave routine as the wflow_sbm model to route the water downstream.
The models are programmed in Python using the PCRaster Python extension. As such, the structure of the model is
transparent, can be changed by other modellers easily, and the system allows for rapid development.
1.1 Installation
By far the easiest way to install wflow is using the conda package manager. This package manager comes with the Anaconda Python distribution. wflow is available in the conda-forge channel. To install you can use the following command:
• conda install -c conda-forge wflow
If this works it will install wflow with all dependencies including Python and PCRaster, and you can skip the rest of the installation instructions.
The main dependencies for wflow are an installation of Python 3.6+, and PCRaster 4.2+. Only 64 bit OS/Python is
supported.
Installing Python
For Python we recommend using the Anaconda Distribution for Python 3, which is available for download from
https://www.anaconda.com/download/. The installer gives the option to add python to your PATH environment
variable. We will assume in the instructions below that it is available in the path, such that python, pip, and conda
are all available from the command line.
Note that there is no hard requirement specifically for Anaconda’s Python, but often it makes installation of required
dependencies easier using the conda package manager.
Installing pcraster
• If you are using conda, pcraster will be installed automatically in the section below, otherwise:
• Download pcraster from http://pcraster.geo.uu.nl/ website (version 4.2+)
• Follow the installation instructions at http://pcraster.geo.uu.nl/quick-start-guide/
The easiest and most robust way to install wflow is by installing it in a separate conda environment. In the root repository directory there is an environment.yml file. This file lists all dependencies. Either use the environment.yml file from the master branch (please note that the master branch can change rapidly and break functionality without warning), or from one of the releases {release}.
Run this command to start installing all wflow dependencies:
• conda env create -f environment.yml
This creates a new environment with the name wflow. To activate this environment in a session, run:
• activate wflow
For the installation of wflow there are two options (from the Python Package Index (PyPI) or from Github). To install
a release of wflow from the PyPI (available from release 2018.1):
• pip install wflow=={release}
To install directly from GitHub (from the HEAD of the master branch):
• pip install git+https://github.com/openstreams/wflow.git
or from Github from a specific release:
• pip install git+https://github.com/openstreams/wflow.git@{release}
Now you should be able to start this environment's Python with python; try import wflow to see if the package is installed.
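For example, the following one-liner should exit without errors if the install worked:
• python -c "import wflow"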
More details on how to work with conda environments can be found here: https://conda.io/docs/user-guide/tasks/
manage-environments.html
If you are planning to make changes and contribute to the development of wflow, it is best to make a git clone of the repository, and do an editable install in the location of your clone. This will not move a copy to your Python installation directory, but instead create a link in your Python installation pointing to the folder you installed it from, such that any changes you make there are directly reflected in your install.
• git clone https://github.com/openstreams/wflow.git
• cd wflow
• activate wflow
• pip install -e .
Alternatively, if you want to avoid using git and simply want to test the latest version from the master branch, you
can replace the first line with downloading a zip archive from GitHub: https://github.com/openstreams/wflow/archive/
master.zip
Besides the recommended conda environment setup described above, you can also install wflow with pip. For the
more difficult to install Python dependencies, it is best to use the conda package manager:
• conda install numpy scipy gdal netcdf4 cftime xarray pyproj numba python-dateutil
Then install a release {release} of wflow (available from release 2018.1) with pip:
• pip install wflow=={release}
If you want to avoid using conda, an example of a PCRaster build and pip install on Ubuntu Linux can be found in
issue #36.
To check if the install was successful, go to the examples directory and run the following command:
• python -m wflow.wflow_sbm -C wflow_rhine_sbm -R testing
This should run without errors.
1.2 How to use the models
A case is a directory holding all the data needed to run the model. Multiple cases may exist next to each other in separate directories. The model will only work with one case at a time. If no case is specified when starting the model, a default case (default_sbm or default_hbv) is assumed. Within a case the model output (the results) is stored in a separate directory. This directory is called the run, indicated with a runId. This structure is indicated in the figure below:
Figure: layout of a wflow case directory and its subdirectories (described below).
If you want to save the results and not overwrite the results from a previous run a new runId must be specified.
inmaps Directory holding the dynamic input data. Maps of Precipitation, potential evapotranspiration and (option-
ally) temperature in pcraster mapstack format.
instate Directory holding the input initial conditions. Can be used to hotstart the model. Alternatively the model can
start with default initial conditions but in that case a long spinup procedure may be needed. This is done using
the -I command-line option.
intbl Directory holding the lookup tables. These hold the model parameters specified per landuse/soiltype class. Note
that you can use the -i option to specify an alternative name (e.g. to support an alternative model calibration).
Optionally a .tbl.mult file can be given for each parameter. This file is used after loading the .tbl file or .map file
to multiply the results with. Can be used for calibration etc.
intss Directory holding the scalar input timeseries. Scalar input data is only assumed if the ScalarInput entry in the
ini file is set to 1 (True).
outstate Directory holding the state variables at the end of the run. These can be copied back to the instate directory to have the model start from these conditions. These are also saved in the runId/outstate directory.
run_default The default name for a run. If no runId is given, all output data is saved in this directory.
staticmaps Static maps (DEM, etc) as prepared by the wflow_prep script.
wflow_sbm|hbv.ini The default settings file for wflow_sbm or wflow_hbv
Overview
In general the model is run from the Windows/Linux command line. Depending on the system settings you can call the wflow_[sbm|hbv].py file directly, or you need to call python with the script as the first argument, e.g.:
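A typical invocation could look like the following (an illustrative command; the run period, here 365 daily timesteps, is set through the [run] section of the ini file or the -S/-T options):
• python -m wflow.wflow_sbm -C myCase -R calib_run -f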
In the example above the wflow_sbm model is run using the information in case myCase, storing the results in runId calib_run. A total of 365 timesteps is performed and the model will overwrite existing output in the calib_run directory. The default .ini file wflow_sbm.ini located in the myCase directory is read at startup.
Command-line options
The command line options for wflow_sbm are summarized below, use wflow_sbm -h to view them at the command
line (option for other models may be different, see their respective documentation to see the options):
-X: save state at the end of the run over the initial conditions at the start
-f: Force overwrite of existing results
-T: Set end time of the run: yyyy-mm-dd hh:mm:ss
-S: Set start time of the run: yyyy-mm-dd hh:mm:ss
-s: Set the model timesteps in seconds
-I: re-initialize the initial model conditions with default
-i: Set input table directory (default is intbl)
-x: Apply multipliers (-P/-p ) for subcatchment only (e.g. -x 1)
-C: set the name of the case (directory) to run
-R: set the name runId within the current case
-L: set the logfile
-c: name of wflow configuration file (default: Casename/wflow_sbm.ini).
-h: print usage information
-P: set parameter change string (e.g.: -P "self.FC = self.FC * 1.6") for non-dynamic variables
-p: set parameter change string (as -P) for dynamic variables
-l: loglevel (must be one of DEBUG, WARNING, ERROR)
wflow_sbm|hbv.ini file
The wflow_sbm|hbv.ini file holds a number of settings that determine how the model is operated. The file consists of sections that hold entries. A section is defined using a keyword in square brackets (e.g. [model]). Variables can be set in each section using a keyword = value combination (e.g. ScalarInput = 1). The default settings for the ini file are given in the subsections below.
[model] Options for all models:
ModelSnow = 0 Set to 1 to model snow using a simple degree day model (in that case temperature data is needed).
WIMaxScale = 0.8 Scaling for the topographical wetness vs soil depth method.
[outputmaps]
self.RiverRunoff=run
self.SnowMelt=sno
Tip: NB: see the wflow_sbm.py code for all the available variables, as this list is incomplete. Also check the framework documentation for the [run] section.
The values on the right side of the equal sign can be chosen freely.
Example content:
self.RiverRunoff=run
self.Transfer=tr
self.SatWaterDepth=swd
[outputcsv_0-n] [outputtss_0-n]
Number of sections to define output timeseries in csv format. Each section should at least contain one samplemap item and one or more variables to save. The samplemap is the map that determines how the timeseries are averaged/sampled. All other items are variable = filename pairs. The filename is given relative to the case directory.
Example:
[outputcsv_0]
samplemap=staticmaps/wflow_subcatch.map
self.RiverRunoffMM=Qsubcatch_avg.csv
[outputcsv_1]
samplemap=staticmaps/wflow_gauges.map
self.RiverRunoffMM=Qgauge.csv
[outputtss_0]
samplemap=staticmaps/wflow_landuse.map
self.RiverRunoffMM=Qlu.tss
In the above example the river discharge of this model (self.RiverRunoffMM) is saved as an average per subcatchment,
a sample at the gauge locations and as an average per landuse.
[inputmapstacks]
This section can be used to overwrite the default names of the input mapstacks
Precipitation = /inmaps/P timeseries for rainfall
EvapoTranspiration = /inmaps/PET potential evapotranspiration
Temperature = /inmaps/TEMP temperature time series
Inflow = /inmaps/IF in/outflow locations (abstractions)
If a file (in .tss format) with measured discharge is specified using the -U command-line option the model will try to
update (match) the flow at the outlet to the measured discharge. In that case the -u option should also be specified to
indicate which of the columns must be used. When updating is switched on the following steps are taken:
• the difference at the outlet between measured and simulated Q (in mm) is determined
• this difference is added to the unsaturated store for all cells
• the ratio of measured Q divided by simulated Q at the outlet is used to multiply the kinematic wave store with.
This ratio is scaled according to a maximum distance from the gauge.
Please note the following points when using updating:
• The tss file should have as many columns as there are gauges defined in the model
• The tss file should have enough data points to cover the simulation time
• The -u option should be used to specify which columns to actually use and in which order to use them. For example: -u '[1,3,2]' indicates to use columns 1, 3 and 2 in that order.
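An illustrative run with updating switched on could look like this (the tss file name and column selection are placeholders):
• python -m wflow.wflow_sbm -C myCase -R update_run -U intss/Qmeasured.tss -u "[1]"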
[layout]
sizeinmetres = 1
[fit]
areamap = staticmaps/wflow_subcatch.map
areacode = 1
Q = testing.tss
WarmUpSteps = 1
ColMeas = 0
parameter_1 = RootingDepth
parameter_0 = M
ColSim = 0
[misc]
[outputmaps]
self.RiverRunoff = run
[framework]
debug = 0
outputformat = 1
[inputmapstacks]
Inflow = /inmaps/IF
Precipitation = /inmaps/P
Temperature = /inmaps/TEMP
EvapoTranspiration = /inmaps/PET
[model]
wflow_river = staticmaps/wflow_river.map
InterpolationMethod = inv
The actual data requirements depend on the application of the model. The following list summarizes the data require-
ments:
• Static data
– Digital Elevation Model (DEM)
– A Land Cover map
– A map representing Soil physical parameters (the Land Cover map can also be used)
• Dynamic data (spatial time series, map-stacks)
– Precipitation
– Potential evapotranspiration
– Temperature (optional, only needed for snow pack modelling)
• Model parameters (per land use/soil type)
– Soil Depth
– etc. . . (see Input parameters (lookup tables or maps))
Setting-up a new model first starts with making a number of decisions and gathering the required data:
1. Do I have the static input maps in pcraster format (DEM ,land-use map, soil map)?
2. What resolution do I want to run the model on?
3. Do I need to define multiple sub-catchments to report totals/flows for separately?
4. What forcing data do I have available for the model (P, Temp, ET)?
5. Do I have gridded forcing data or scalar time-series?
Note: Quantum Gis (QGIS) can read and write pcraster maps (via gdal) and is a very handy tool to support data
preparation.
Note: Within the earth2observe project tools are being made to automatically download and downscale reanalysis data to be used as forcing for the wflow models. See https://github.com/earth2observe/downscaling-tools
Depending on the formats of the data some converting of data may be needed. The procedure described below assumes
you have the main maps available in pcraster format. If that is not the case free tools like Qgis (www.qgis.org) and
gdal can be used to convert the maps to the required format. Qgis is also very handy to see if the results of the scripts
match reality by overlaying it with a google maps or OpenStreetMap layer using the qgis openlayers plugin.
When all data is available setting up the model requires the following steps:
1. Run the wflow_prepare_step1 and 2 scripts or prepare the input maps by hand (see Preparing static input maps)
2. Setup the wflow model directory structure (Setup a case) and copy the files (results from step2 of the prepare
scripts) there (see Setting Up a Case)
3. Setup the .ini file
4. Test run the model
5. Supply all the .tbl files (or complete maps) for the model parameters (see Input parameters (lookup tables or
maps))
6. Calibrate the model
Introduction
Preparing the input maps for a distributed model is not always trivial. wflow comes with two scripts that help in this process. The scripts are made with the assumption that the base DEM you have has a higher resolution than the DEM you want to use for the final model. When upscaling, the scripts try to maintain as much information from the high resolution DEM as possible. The procedure described here can be used for all wflow models (wflow_sbm or wflow_hbv).
The scripts assume you have a DEM, landuse and soil map available in pcraster format. If you do not have a soil or landuse map, then you can generate a uniform map. The resolution and domain of these maps do not need to be the same; the scripts will take care of resampling. The process is divided in two scripts, wflow_prepare_step1.py and wflow_prepare_step2.py. In order to run the scripts the following maps/files need to be prepared.
Note: Both scripts need the pcraster and gdal executables (version >= 1.10) to be available in your computer's search path
Tip: Another option is to prepare a “pseudo dem” from a shape file with already defined catchment
boundaries and outlets. Here all non boundary points would get a value of 1, all boundaries a value of
2 and all outlets a value of -10. This helps in generating a ldd for polder areas or other areas where the
topography is not the major factor in determining the drainage network.
4. Determine various statistics and also the largest catchment present in the DEM. This area will be used later
on to make sure the catchments derived in the second step will match the catchment derived from the high
resolution DEM
Once the script is finished successfully the following maps should have been created, the data type is shown between
brackets:
• wflow_catchment.map (ordinal)
• wflow_dem.map (scalar)
• wflow_demmax.map (scalar)
• wflow_demmin.map (scalar)
• wflow_dem[percentile].map (10, 25, 33, 50, 66, 75, 90) (scalar)
• wflow_gauges.map (ordinal)
• wflow_landuse.map (nominal)
• wflow_soil.map (nominal)
• wflow_ldd.map (ldd)
• wflow_outlet.map (scalar)
• wflow_riverburnin.map (boolean)
• wflow_riverlength_fact.map (scalar)
• wflow_river.map (ordinal)
• wflow_streamorder.map (ordinal)
• wflow_subcatch.map (ordinal)
The maps are created in the data processing directory. To use the maps in the model copy them to the staticmaps
directory of the case you have created.
Note: Getting the subcatchment right can be a bit of a problem. In order for the subcatchment calculations to succeed, the gauges that determine the outlets must be on a river grid cell. If the subcatchment creation causes problems, the best way to check what is going on is to import both wflow_gauges.map and wflow_streamorder.map in qgis so you can check if the gauges are on a river cell. In the ini file you define the order above which a grid cell is regarded as a river.
Note: If the cellsize of the output maps is identical to the input DEM, the second script should NOT be run. All data will be produced by the first script.
[directories]
# all paths are relative to the workdir set on the command line
# The directories in which the scripts store the output:
step1dir = step1
step2dir = step2
[files]
# Name of the DEM to use
masterdem=srtm_58_14.map
# name of the land-use map to use
landuse=globcover_javabali.map
soil=soil.map
# Shape file with river/drain network. Use to "burn in" into the dem.
river=river.shp
riverattr=river
# The riverattr above should be the shapefile-name without the .shp extension
[settings]
# Nr to reduce the initial map with in step 1. This means that all work is done
# on an upscaled version of the initial DEM. May be useful for very
# large maps. If set to 1 (default) no scaling takes place
initialscale=1
# Set lddmethod to dem (other methods are not working at the moment)
lddmethod=dem
# If set to 1 the gauge points are moved to the nearest river point on a river
# with a strahler order higher than or identical to the order defined in this ini file
snapgaugestoriver=1
# The strahler order above (and including) which a pixel is defined as a river cell
riverorder=4
gauges_y = -6.1037
gauges_x = 107.4357
Problems
In many cases the scripts will not produce the maps the way you want them in the first try. The most common problems
are:
1. The gauges do not coincide with a river and thus the subcatchment is not correct
• Move the gauges to a location on the rivers as determined by the scripts. The best way to do this is to load
the wflow_streamorder.map in qgis and use the cursor to find the nearest river cell for a gauge.
2. The delineated catchment is not correct even if the gauge is at the proper location
• Get a better DEM or fix the current DEM.
• Use a river shape file to fix the river locations
• Use a catchment mask to force the catchment delineation to follow that mask. Or just clip the DEM with the catchment mask. In the latter case use the lddin option to make sure you use the entire catchment.
If you still run into problems you can adjust the scripts yourself to get better results.
wflow_prepare_step1
wflow data preparation script. Data preparation can be done by hand or using the two scripts. This script does the first step. The second script does the resampling. These scripts need the pcraster and gdal executables to be available in your search path.
Usage:
$Id: $
wflow_prepare_step1.OpenConf(fn)
wflow_prepare_step2
wflow data preparation script. Data preparation can be done by hand or using the two scripts. This script does the resampling. These scripts need the pcraster and gdal executables to be available in your search path.
Usage:
$Id: $
wflow_prepare_step2.OpenConf(fn)
wflow_prepare_step2.configget(config, section, var, default)
wflow_prepare_step2.main()
wflow_prepare_step2.resamplemaps(step1dir, step2dir)
Resample the maps from step1 and rename them in the process
wflow_prepare_step2.usage(*args)
PM
Note: Describes how to setup a model case structure. Probably need to write a script that does it automatically.
See wf_DynamicFramework for information on the settings in the ini file. The model specific settings are described separately for each model.
The PCRaster lookup tables listed below are used by the model to create input parameter maps. Each table should have at least four columns. The first column is used to identify the land-use class in the wflow_landuse map, the second column indicates the subcatchment (wflow_subcatch), the third column the soil type (wflow_soil.map) and the last column lists the value that will be assigned based on the first three columns.
Alternatively the lookup table can be replaced by a PCRaster map (in the staticmaps directory) with the same name as
the tbl file (but with a .map extension).
Note: Note that the list of model parameters is (always) out of date. Getting the .tbl files from the example models (default_sbm and default_hbv) is probably the best way to start. In any case wflow will use default values for the tbl files that are missing (shown in the log messages).
Below the contents of an example .tbl file is shown. In this case the parameters are identical for each subcatchment (and soil type) but differ for each landuse type. See the pcraster documentation (http://www.pcraster.eu) for details on how to create .tbl files.
1 <,14] 1 0.11
2 <,14] 1 0.11
3 <,14] 1 0.15
4 <,14] 1 0.11
5 <,14] 1 0.11
6 <,14] 1 0.11
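For illustration, this is roughly how such a table is applied using the PCRaster Python API (a sketch; the file names are examples, and the models additionally fall back to default values when a table is missing):

import pcraster as pcr

landuse = pcr.readmap("staticmaps/wflow_landuse.map")
subcatch = pcr.readmap("staticmaps/wflow_subcatch.map")
soil = pcr.readmap("staticmaps/wflow_soil.map")
# Column order follows the .tbl layout described above: landuse, subcatchment, soil, value
RootingDepth = pcr.lookupscalar("intbl/RootingDepth.tbl", landuse, subcatch, soil)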
Note: please note that if the rules in the tbl file do not cover all cells used in the model you will get missing values in the output. Check the maps in the runId/outsum directory to see if this is the case. Also, the model will generate an error message in the log file if this is the case, so be sure to check the log file if you encounter problems. The message will read something like: "ERROR: Not all catchment cells have a value for. . . "
1.4.1 Questions
1. The discharge in the timeseries output gives weird numbers (1E31); what is going wrong?
2. How do I set up a wflow model?
3. Why do I have missing values in my model output?
4. wflow stops and complains about types not matching.
5. wflow complains about missing initial state maps.
6. In some areas the mass balance error seems large.
1.4.2 Answers
1. The 1E31 values indicate missing values. This probably means that at least one of the cells upstream of the discharge point has a missing value. Missing values are routed downstream, so any missing value upstream of a discharge point will cause the discharge to eventually become a missing value. To resolve this check the following:
• Check if the .tbl files are correct (do they cover all values in the landuse, soil and subcatchment maps)
• Check for missing values in the input maps
• Check if the model parameters are within the working range: e.g. you have set a parameter (e.g. the canopy gap fraction in the interception model > 1) to an unrealistic value
• Check all maps in the runId/outsum directory to see at which stage the missing values start
• Check whether the soil/landuse/catchment maps cover the whole domain
Note: missing values in upstream cells are routed down and will eventually make all downstream values missing. Check the maps in the runId/outsum directory to see if the tbl files are correct.
2. First read the section on Setting-up a new model. Next check one of the supplied example models.
3. See the answer to question 1.
4. The underlying pcraster framework is very picky about data types. As such the maps must all be of the expected type, e.g. your landuse map MUST be nominal. See the pcraster documentation at pcraster.eu for more information. Note: if you create maps with qgis (or gdal), specify the right output type (e.g. Float32 for scalar maps).
5. Run the model with the -I option first and copy the resulting files in runId/outstate back to the instate directory.
6. The simple explicit solution of most models can cause this, especially when parameter values are outside the normally used range and with large timesteps. For example, setting the soil depth to zero will usually cause large errors. The solution is usually to check the parameters throughout the model.
Introduction
The Hydrologiska Byrans Vattenbalansavdelning (HBV) model was introduced back in 1972 by the Swedish Meteorological and Hydrological Institute (SMHI). The HBV model is mainly used for runoff simulation and hydrological forecasting. The model is particularly useful for catchments where snowfall and snow melt are dominant factors, but application of the model is by no means restricted to these types of catchments.
Description
The model is based on the HBV-96 model. However, the hydrological routing, represented in HBV by a triangular function controlled by the MAXBAS parameter, has been removed. Instead, the kinematic wave function is used to route the water downstream. All runoff that is generated in a cell in one of the HBV reservoirs is added to the kinematic wave reservoir at the end of a timestep. There is no connection between the different HBV cells within the model. Wherever possible, all functions that describe the distribution of parameters within a subbasin have been removed, as this is not needed in a distributed application.
A catchment is divided into a number of grid cells. For each of the cells individually, daily runoff is computed through application of the HBV-96 version of the HBV model. The use of the grid cells offers the possibility to turn the HBV modelling concept, which is originally lumped, into a distributed model.
The figure above shows a schematic view of hydrological response simulation with the HBV-modelling concept. The
land-phase of the hydrological cycle is represented by three different components: a snow routine, a soil routine and a
runoff response routine. Each component is discussed separately below.
Precipitation enters the model via the snow routine. If the air temperature, T_a, is below a user-defined threshold TT (≈ 0 °C) precipitation occurs as snowfall, whereas it occurs as rainfall if T_a ≥ TT. Another parameter, TTI, defines how precipitation can occur partly as rain and partly as snowfall (see the figure below). If precipitation occurs as snowfall, it is added to the dry snow component within the snow pack. Otherwise it ends up in the free water reservoir, which represents the liquid water content of the snow pack. Between the two components of the snow pack, interactions take place, either through snow melt (if temperatures are above a threshold TT) or through snow refreezing (if temperatures are below threshold TT). The respective rates of snow melt and refreezing are:
Q_m = cfmax (T_a - TT)           for T_a > TT
Q_r = cfmax * cfr * (TT - T_a)   for T_a < TT

where Q_m is the rate of snow melt, Q_r is the rate of snow refreezing, and cfmax and cfr are user-defined model parameters (the melting factor in mm/(°C day) and the refreezing factor respectively).
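A minimal sketch of these two rates (plain Python with scalar values for readability; in the model the same logic is applied cell by cell with PCRaster operations):

def snow_fluxes(Ta, TT, cfmax, cfr):
    """Return (melt, refreeze) rates in mm/day for air temperature Ta [deg C]."""
    melt = cfmax * (Ta - TT) if Ta > TT else 0.0             # Q_m
    refreeze = cfmax * cfr * (TT - Ta) if Ta < TT else 0.0   # Q_r
    return melt, refreeze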
Note: The FoCFMAX parameter from the original HBV version is not used. Instead the CFMAX is presumed to be set for the landuse per pixel. Normally for forested pixels the CFMAX is 0.6 * CFMAX.
The air temperature, 𝑇𝑎 , is related to measured daily average temperatures. In the original HBV-concept, elevation dif-
ferences within the catchment are represented through a distribution function (i.e. a hypsographic curve) which makes
the snow module semi-distributed. In the modified version that is applied here, the temperature, 𝑇𝑎 , is represented in
a fully distributed manner, which means for each grid cell the temperature is related to the grid elevation.
The fraction of liquid water in the snow pack (free water) is at most equal to a user-defined fraction, WHC, of the water equivalent of the dry snow content. If the liquid water concentration exceeds WHC, either through snow melt or incoming rainfall, the surplus water becomes available for infiltration into the soil:

Q_in = max(SW - WHC * SD, 0)

where Q_in is the volume of water added to the soil module, SW is the free water content of the snow pack and SD is the dry snow content of the snow pack.
The snow model also has an optional (experimental) 'mass-wasting' routine. This transports snow downhill using the local drainage network. To use it, set the variable MassWasting in the model section to 1.
# Masswasting of snow
# 5.67 = tan 80 graden
SnowFluxFrac = min(0.5,self.Slope/5.67) * min(1.0,self.DrySnow/MaxSnowPack)
MaxFlux = SnowFluxFrac * self.DrySnow
self.DrySnow = accucapacitystate(self.TopoLdd,self.DrySnow, MaxFlux)
self.FreeWater = accucapacitystate(self.TopoLdd,self.FreeWater,SnowFluxFrac * self.FreeWater)
Glaciers
The original HBV version includes both a multiplication factor for potential evaporation and an exponential reduction factor for potential evapotranspiration during rain events. The CEVPF factor is used to correct the potential evapotranspiration per landuse. In the original version the CEVPFO factor is used and it applies to forest landuse only.
Interception
The parameters ICF0 and ICFI introduce interception storage for forested and non-forested zones respectively in the original model. Within our application this is replaced by a single ICF parameter, assuming the parameter is set for each grid cell according to the land use. In the original application it is not clear if interception evaporation is subtracted from the potential evaporation. In this implementation we do subtract the interception evaporation to ensure total evaporation does not exceed potential evaporation. From this storage, evaporation equal to the potential rate ET_p will occur as long as water is available, even if it is stored as snow. All water enters this store first; there is no concept of free throughfall (e.g. through gaps in the canopy). In the model a running water budget is kept of the interception store:
• The available storage (ICF-Actual storage) is filled with the water coming from the snow routine (𝑄𝑖𝑛 )
• Any surplus water now becomes the new 𝑄𝑖𝑛
• Interception evaporation is determined as the minimum of the current interception storage and the potential
evaporation
The incoming water from the snow and interception routines, Q_in, is available for infiltration in the soil routine. The soil layer has a limited capacity, F_c, to hold soil water, which means if F_c is exceeded the abundant water cannot infiltrate and, consequently, becomes directly available for runoff:

Q_dr = max(SM + Q_in - F_c, 0)

where Q_dr is the abundant soil water (also referred to as direct runoff) and SM is the soil moisture content. Consequently, the net amount of water that infiltrates into the soil, I_net, equals:

I_net = Q_in - Q_dr

Part of the infiltrating water, I_net, will runoff through the soil layer (seepage). This runoff volume, SP, is related to the soil moisture content, SM, through the following power relation:
Part of the infiltrating water, 𝐼𝑛𝑒𝑡 , will runoff through the soil layer (seepage). This runoff volume, 𝑆𝑃 , is related to
the soil moisture content, 𝑆𝑀 , through the following power relation:
SP = I_net (SM / F_c)^β
where β is an empirically based parameter. Application of this equation implies that the amount of seepage water increases with increasing soil moisture content. The fraction of the infiltrating water which doesn't runoff, I_net − SP, is added to the available amount of soil moisture, SM. The β parameter affects the amount of supply to the soil moisture reservoir that is transferred to the quick response reservoir. Values of β generally vary between 1 and 3. Larger values of β reduce runoff and indicate a higher absorption capacity of the soil.
A percentage of the soil moisture will evaporate. This percentage is related to the measured potential evaporation and
the available amount of soil moisture:
E_a = E_p (SM / T_m)   for SM < T_m
E_a = E_p              for SM ≥ T_m
The volume of water which becomes available for runoff, S_dr + SP, is transferred to the runoff response routine. In this routine the runoff delay is simulated through the use of a number of linear reservoirs.
Two linear reservoirs are defined to simulate the different runoff processes: the upper zone (generating quick runoff and interflow) and the lower zone (generating slow runoff). The available runoff water from the soil routine (i.e. direct runoff, S_dr, and seepage, SP) in principle ends up in the lower zone, unless the percolation threshold, PERC, is exceeded, in which case the redundant water ends up in the upper zone:

ΔV_LZ = min(PERC, S_dr + SP)
ΔV_UZ = max(0, S_dr + SP - PERC)

where V_UZ is the content of the upper zone, V_LZ is the content of the lower zone and Δ means increase of.
Capillary flow from the upper zone to the soil moisture reservoir is also modelled. The outflow from the upper zone is given by:

Q_q = K * UZ^(1+α)

where K is the upper zone recession coefficient and α determines the amount of non-linearity. Within HBV-96, the value of K is determined from three other parameters: α, KHQ, and HQ (mm/day). The value of HQ represents an outflow rate of the upper zone for which the recession rate is equal to KHQ. If we define UZ_HQ to be the content of the upper zone at outflow rate HQ we can write the following equation:

HQ = K * UZ_HQ^(1+α) = KHQ * UZ_HQ

from which K can be solved as K = KHQ^(1+α) / HQ^α.
Note: The HBV-96 manual mentions that for a recession rate larger than 1 the timestep in the model will be adjusted.
If the total water content of the upper zone, 𝑉𝑈 𝑍 , is lower than a threshold 𝑈 𝑍1, the upper zone only generates
interflow. On the other hand, if 𝑉𝑈 𝑍 exceeds 𝑈 𝑍1, part of the upper zone water will runoff as quick flow:
Q_i = K_i * min(UZ1, V_UZ)
Q_q = K_q * max(V_UZ - UZ1, 0.0)
Where 𝑄𝑖 is the amount of generated interflow in one time step, 𝑄𝑞 is the amount of generated quick flow in one time
step and 𝐾𝑖 and 𝐾𝑞 are reservoir constants for interflow and quick flow respectively.
The total runoff rate, 𝑄, is equal to the sum of the three different runoff components:
Q = Q_LZ + Q_i + Q_q
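A condensed sketch of this runoff response routine (plain Python; Klz, a linear recession coefficient for the lower zone, is an assumption here since it is not introduced in this excerpt):

def runoff_response(V_UZ, V_LZ, UZ1, Ki, Kq, Klz):
    """One timestep of the upper/lower zone response (storages in mm)."""
    Qi = Ki * min(UZ1, V_UZ)         # interflow from the upper zone
    Qq = Kq * max(V_UZ - UZ1, 0.0)   # quick flow above the UZ1 threshold
    Qlz = Klz * V_LZ                 # slow runoff from the lower zone (assumed linear)
    V_UZ -= Qi + Qq
    V_LZ -= Qlz
    return Qi + Qq + Qlz, V_UZ, V_LZ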
Subcatchment flow
Normally the kinematic wave is continuous throughout the model. By using the SubCatchFlowOnly entry in the model section of the ini file, all flow is kept within the subcatchment and no flow is transferred from one subcatchment to another. This can be handy when connecting the results of the model to a water allocation model such as Ribasim.
Example:
[model]
SubCatchFlowOnly = 1
Introduction
The soil part of wflow_sbm model has its roots in the topog_sbm model but has had considerable changes over time.
topog_sbm is specifically designed to simulate fast runoff processes in small catchments while wflow_sbm can be
applied more widely. The main differences are:
• The unsaturated zone can be split-up in different layers
• The addition of evapotranspiration losses
• The addition of a capillary rise
• Wflow routes water over a D8 network while topog uses an element network based on contour lines and trajec-
tories.
The sections below describe the working of the model in more detail.
Limitations
The wflow_sbm concept uses the kinematic wave approach for channel, overland and lateral subsurface flow, assuming
that the topography controls water flow mostly. This assumption holds for steep terrain, but in less steep terrain
the hydraulic gradient is likely not equal to the surface slope (subsurface flow), or pressure differences and inertial
momentum cannot be neglected (channel and overland flow). In addition, while the kinematic wave equations are solved with a nonlinear scheme using Newton's method (Chow, 1988), other model equations are solved through a simple explicit scheme. In summary the following limitations apply:
• Channel flow, and to a lesser degree overland flow, may be unrealistic in terrain that is not steep, and where
pressure forces and inertial momentum are important.
• The lateral movement of subsurface flow may be very wrong in terrain that is not steep.
• The simple numerical solution means that results from a daily timestep model may be different from those with
an hourly timestep.
The wflow_sbm model assumes the input to be potential evaporation. In many cases the supplied evaporation will be a reference evaporation for a different land cover. In that case you can use the et_reftopot.tbl file to set the multiplication per landuse to go from the supplied evaporation to the potential evaporation for each land cover. By default all values are set to 1.0, assuming the supplied evaporation to be potential.
Snow
[model]
ModelSnow = 1
# Masswasting of snow
# 5.67 = tan 80 graden
SnowFluxFrac = min(0.5,self.Slope/5.67) * min(1.0,self.DrySnow/MaxSnowPack)
MaxFlux = SnowFluxFrac * self.DrySnow
self.DrySnow = accucapacitystate(self.TopoLdd,self.DrySnow, MaxFlux)
self.FreeWater = accucapacitystate(self.TopoLdd,self.FreeWater,SnowFluxFrac * self.FreeWater)
Glaciers
Infiltration
If the surface is (partly) saturated the throughfall and stemflow that falls onto the saturated area is added to the river
runoff component (based on fraction rivers, self.RiverFrac) and to the overland runoff component (based on open
water fraction (self.WaterFrac) minus self.RiverFrac). Infiltration of the remaining water is determined as follows:
The soil infiltration capacity can be adjusted in case the soil is frozen; this is optional and can be set in the ini file as follows:
[model]
soilInfRedu = 1
The remaining storage capacity of the unsaturated store is determined. The infiltrating water is split in two parts: the part that falls on compacted areas and the part that falls on non-compacted areas. The maximum amount of water that can infiltrate in these areas is calculated by taking the minimum of the maximum infiltration rate (InfiltCapSoil for non-compacted areas and InfiltCapPath for compacted areas) and the water on these areas. The water that can actually infiltrate is further limited by the remaining storage capacity of the unsaturated store.
A detailed description of the Topog_SBM model has been given by Vertessy (1999). Briefly: the soil is considered as
a bucket with a certain depth (𝑧𝑡 ), divided into a saturated store (𝑆) and an unsaturated store (𝑈 ), the magnitudes of
which are expressed in units of depth. The top of the 𝑆 store forms a pseudo-water table at depth 𝑧𝑖 such that the value
of 𝑆 at any time is given by:
S = (z_t - z_i)(θ_s - θ_r)
where:
𝜃𝑠 and 𝜃𝑟 are the saturated and residual soil water contents, respectively.
The unsaturated store (𝑈 ) is subdivided into storage (𝑈𝑠 ) and deficit (𝑈𝑑 ) which are again expressed in units of depth:
U_d = (θ_s - θ_r) z_i - U
U_s = U - U_d
The saturation deficit (𝑆𝑑 ) for the soil profile as a whole is defined as:
S_d = (θ_s - θ_r) z_t - S
All infiltrating water enters the 𝑈 store first. The unsaturated layer can be split-up in different layers, by providing the
thickness [mm] of the layers in the ini file. The following example specifies three layers (from top to bottom) of 100,
300 and 800 mm:
[model]
UStoreLayerThickness = 100,300,800
The code checks for each grid cell the specified layers against the SoilThickness, and adds or removes (partly) layer(s)
based on the SoilThickness.
Assuming a unit head gradient, the transfer of water (𝑠𝑡) from a 𝑈 store layer is controlled by the saturated hydraulic
conductivity 𝐾𝑠𝑎𝑡 at depth 𝑧 (bottom layer) or 𝑧𝑖 , the effective saturation degree of the layer, and a Brooks-Corey
power coefficient (parameter 𝑐) based on the pore size distribution index 𝜆 (Brooks and Corey (1964)):
st = K_sat ((θ - θ_r) / (θ_s - θ_r))^c

c = (2 + 3λ) / λ
When the unsaturated layer is not split-up into different layers, it is possible to use the original Topog_SBM vertical
transfer formulation, by specifying in the ini file:
[model]
transfermethod = 1
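As an illustration, the layer transfer described above can be written as follows (a sketch with scalar values; wflow_sbm evaluates this per cell and per layer):

def unsat_transfer(theta, theta_s, theta_r, ksat_z, lam):
    """Vertical transfer st [mm/day] from an unsaturated layer (Brooks-Corey)."""
    c = (2.0 + 3.0 * lam) / lam                       # Brooks-Corey power coefficient
    sat_degree = (theta - theta_r) / (theta_s - theta_r)
    return ksat_z * sat_degree ** c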
Fig. 8: Schematisation of the soil and the connection to the river within the wflow_sbm model
Saturated conductivity (𝐾𝑠𝑎𝑡 ) declines with soil depth (𝑧) in the model according to:
K_sat = K_0 e^(-f z)
where:
𝐾0 is the saturated conductivity at the soil surface and
𝑓 is a scaling parameter [𝑚𝑚−1 ]
The scaling parameter f is defined by:

f = (θ_s - θ_r) / M
with 𝜃𝑠 and 𝜃𝑟 as defined previously and 𝑀 representing a model parameter (expressed in millimeter).
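A small sketch of this relation (plain Python; the parameter values in the usage comment are purely illustrative):

import math

def ksat_at_depth(K0, M, theta_s, theta_r, z):
    """Saturated conductivity [mm/day] at depth z [mm] below the surface."""
    f = (theta_s - theta_r) / M   # scaling parameter [mm-1]
    return K0 * math.exp(-f * z)

# e.g. ksat_at_depth(K0=100.0, M=300.0, theta_s=0.6, theta_r=0.01, z=500.0)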
Figure: Plot of the relation between depth and conductivity for different values of M
The kinematic wave approach for lateral subsurface flow is described in the wflow_funcs Module Subsurface flow
routing
The potential evaporation left over after interception and open water evaporation (rivers and water bodies) is split into potential soil evaporation and potential transpiration based on the canopy gap fraction (assumed to be identical to the amount of bare soil).
For the case of one single soil layer, soil evaporation is scaled according to:

soilevap = potensoilevap * (SaturationDeficit / SoilWaterCapacity)

As such, evaporation will be potential if the soil is fully wetted and it decreases linearly with increasing soil moisture deficit.
For more than one soil layer, soil evaporation is only provided from the upper soil layer (often 100 mm), and soil evaporation is split into evaporation from the unsaturated store and evaporation from the saturated store. First, water is evaporated from the unsaturated store. Then the remaining potential soil evaporation can be used for evaporation from the saturated store. This is only possible when the water table is present in the upper soil layer (very wet conditions). Both the evaporation from the unsaturated store and the evaporation from the saturated store are limited
by the minimum of the remaining potential soil evaporation and the available water in the unsaturated/saturated zone
of the upper soil layer. Also for multiple soil layers, the evaporation (both unsaturated and saturated) decreases linearly
with decreasing water availability.
The original Topog_SBM model does not include transpiration or a notion of capillary rise. In wflow_sbm transpiration is first taken from the S store if the roots reach the water table z_i. If the S store cannot satisfy the demand, the U store is used next. First the number of wet roots is determined (going from 1 to 0) using a sigmoid function as follows:
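A sketch of this S-curve (the parameter names mirror those used in the text; the exact helper in wflow_funcs may differ slightly):

import math

def wet_roots(WaterTable, RootingDepth, sharpness=-80000.0):
    """Fraction of wet roots: ~1 if the water table is above the rooting depth,
    0.5 if they coincide, ~0 if the water table is below the rooting depth.
    Both WaterTable and RootingDepth are in mm below the surface."""
    try:
        return 1.0 / (1.0 + math.exp(-sharpness * (WaterTable - RootingDepth)))
    except OverflowError:
        # exp() overflows for a water table far below the roots: dry roots
        return 0.0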
Here the sharpness parameter (by default a large negative value, -80000.0) determines whether there is a stepwise output or a more gradual output (default is stepwise). WaterTable is the level of the water table in the grid cell in mm below the surface, and RootingDepth is the maximum depth of the roots, also in mm below the surface. For all values of WaterTable smaller than RootingDepth a value of 1 is returned, if they are equal a value of 0.5 is returned, and if the WaterTable is larger than the RootingDepth a value of 0 is returned. The returned WetRoots fraction is multiplied by the potential evaporation (and limited by the available water in the saturated zone) to get the transpiration from the saturated part of the soil.
Figure: Plot showing the fraction of wet roots for different values of c for a RootingDepth of 275 mm
Next the remaining potential evaporation is used to extract water from the unsaturated store. The fraction of roots
(AvailCap) that cover the unsaturated zone for each soil layer is used to calculate the potential root water extraction
rate (MaxExtr):
When setting Whole_UST_Avail to 1 in the ini file as follows, the complete unsaturated storage is available for
transpiration:
[model]
Whole_UST_Avail = 1
The pressure head h of the unsaturated zone is related to its water content following Brooks and Corey (1964):

(θ - θ_r) / (θ_s - θ_r) = (h_b / h)^λ

where: h is the pressure head (cm), h_b is the air entry pressure head (cm, both taken here as positive suction values), and θ, θ_s, θ_r and λ as previously defined.
Feddes (1978) described a transpiration reduction curve for the reduction coefficient α as a function of h. Below is the (abridged) function used in wflow_sbm that calculates actual transpiration from the unsaturated zone layer(s).
"""
Actual transpiration function for unsaturated zone:
Input:
Output:
# Next step is to make use of the Feddes curve in order to decrease ActEvapUstore
˓→when soil moisture values
# occur above or below ideal plant growing conditions (see also Feddes et al., 1978).
˓→h1-h4 values are
# actually negative, but all values are made positive for simplicity.
h1 = hb # cm (air entry pressure)
h2 = 100 # cm (pF 2 for field capacity)
h3 = 400 # cm (pF 3, critical pF value)
h4 = 15849 # cm (pF 4.2, wilting point)
# According to Brooks-Corey
(continues on next page)
head = max(head,hb)
Capillary rise is determined using the following approach: first the K_sat is determined at the water table z_i; next a potential capillary rise is determined from the minimum of the K_sat, the actual transpiration taken from the U store, the available water in the S store and the deficit of the U store. Finally the potential rise is scaled using the distance between the roots and the water table using:

CSF = CS / (CS + z_i - RT)

in which CSF is the scaling factor to multiply the potential rise with, CS is a model parameter (default = 100, use CapScale.tbl to set differently) and RT the rooting depth. If the roots reach the water table (RT > z_i), CS is set to zero, thus setting the capillary rise to zero.
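The scaling can be sketched as:

def cap_rise_scaling(zi, RT, CS=100.0):
    """Factor used to scale the potential capillary rise (zi and RT in mm)."""
    if RT > zi:
        return 0.0                # roots reach the water table: no capillary rise
    return CS / (CS + zi - RT)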
If the MaxLeakage parameter is set > 0, water is lost from the saturated zone and runs out of the model.
Soil temperature
The near-surface soil temperature is modelled using a simple equation (Wigmosta et al., 2009):

T_s^t = T_s^(t-1) + w (T_a - T_s^(t-1))

where T_s^t is the near-surface soil temperature at time t, T_a is the air temperature and w is a weighting coefficient determined through calibration (default is 0.1125 for daily timesteps).
A reduction factor (cf_soil, default is 0.038) is applied to the maximum infiltration rate (InfiltCapSoil and InfiltCap-
Path), when the following model settings are specified in the ini file:
[model]
soilInfRedu = 1
ModelSnow = 1
An S-curve (see plot below) is used to make a smooth transition (a c-factor (c) of 8 is used):

b = 1.0 / (1.0 - cf_soil)
soilInfRedu = 1.0 / (b + exp(-c (T_s - a))) + cf_soil
a = 0.0
c = 8.0
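A sketch of the resulting reduction factor as a function of the near-surface soil temperature (plain Python, using the constants above):

import math

def infiltration_reduction(Tsoil, cf_soil=0.038, a=0.0, c=8.0):
    """Multiplier applied to InfiltCapSoil/InfiltCapPath for (partly) frozen soil."""
    b = 1.0 / (1.0 - cf_soil)
    return 1.0 / (b + math.exp(-c * (Tsoil - a))) + cf_soil

For warm soils the factor approaches 1; for frozen soils it approaches cf_soil.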
Water demand (surface water only) by irrigation can be configured in two ways:
1. By specifying the water demand externally (as a lookup table, series of maps etc)
2. By defining irrigation areas. Within those areas the demand is calculated as the difference between potential ET
and actual transpiration
For both options a fraction of the supplied water can be put back into the river at specified locations
The following maps and variables can be defined:
• wflow_irrigationareas.map: Map of areas where irrigation is applied. Each area has a unique id. The areas do not need to be contiguous, but all cells with the same id are assumed to belong to the same irrigation area.
• wflow_irrisurfaceintake.map: Map of intake points at the river(s). The id of each point should correspond to
the id of an area in the wflow_irrigationareas map.
• wflow_irrisurfacereturns.map: Map of water return points at the river(s). The id of each point should corre-
spond to the id of an area in the wflow_irrigationareas map or/and the wflow_irrisurfaceintake.map.
• IrriDemandExternal: Irrigation demand supplied to the model. This can be done by adding an entry to the modelparameters section. If this is done, the irrigation demand supplied here is used and it is NOT determined by the model. Water demand should be given with a negative sign! See below for an example entry in the modelparameters section:
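An entry could look like the following (the table name is illustrative; the list format mirrors the DemandReturnFlowFraction example further below):

[modelparameters]
IrriDemandExternal=intbl/IrriDemandExternal.tbl,tbl,-34.0,0,staticmaps/wflow_irrisurfaceintakes.map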
In this example the default demand is -34 m3 s−1 . The demand must be linked to the map
wflow_irrisurfaceintakes.map. Alternatively we can define this as a timeseries of maps:
IrriDemandExternal=/inmaps/IRD,timeseries,-34.0,0
• DemandReturnFlowFraction: Fraction of the supplied water that returns back into the river system (between 0 and 1). This fraction must be supplied at the wflow_irrisurfaceintakes.map locations, but the water that is returned to the river will be returned at the wflow_irrisurfacereturns.map locations. If this variable is not defined the default is 0.0. See below for an example entry in the modelparameters section:
DemandReturnFlowFraction=intbl/IrriDemandReturn.tbl,tbl,0.0,0,staticmaps/wflow_
˓→irrisurfaceintakes.map
Fig. 9: Figure showing the three maps that define the irrigation intake points, irrigation areas and return flow locations.
Paddy areas (irrigated rice fields) can be defined by including the following maps:
• wflow_irrigationpaddyareas.map: Map of areas where irrigated rice fields are located.
• wflow_hmax.map: Map with the optimal water height [mm] in the irrigated rice fields.
• wflow_hp.map: Map of the water height [mm] when rice field starts spilling water (overflow).
• wflow_hmin.map: Map with the minimum required water height in the irrigated rice fields.
• wflow_irrisurfaceintake.map: Map of intake points at the river(s). The id of each point should correspond to
the id of an area in the wflow_irrigationpaddyareas map.
Furthermore, gridded timeseries indicating whether rice crop growth occurs (value = 1) or not (value = 0) are required. These timeseries can be included as follows:
[modelparameters]
CRPST=inmaps/CRPSTART,timeseries,0.0,1
Wflow_sbm will estimate the irrigation water demand as follows: a ponding depth (self.PondingDepth) is simulated in the grid cells with a rice crop. Potential evaporation left after interception and open water evaporation (self.RestEvap) is subtracted from the ponding depth as follows:
if self.nrpaddyirri > 0:
self.ActEvapPond = pcr.min(self.PondingDepth, self.RestEvap)
self.PondingDepth = self.PondingDepth - self.ActEvapPond
self.RestEvap = self.RestEvap - self.ActEvapPond
Infiltration excess and saturation excess water are added to the ponding depth in grid cells with a rice crop, up to the maximum water height at which the rice field starts spilling water.
The irrigation depth is then determined as follows:
if self.nrpaddyirri > 0:
irr_depth = (
pcr.ifthenelse(
self.PondingDepth < self.h_min, self.h_max - self.PondingDepth, 0.0
)
* self.CRPST
)
The irrigation depth is converted to m3 s-1 for each irrigated paddy area, at the corresponding intake point at the river. The demand is converted to a supply (taking into account the available water in the river) and converted to an amount in mm over the irrigation area. The supply is applied in the next timestep as extra water available for infiltration in the irrigation area.
This functionality was added to simulate rice crop production when coupled (through Basic Model Interface (BMI))
to the The wflow_lintul Model.
Both overland flow and river flow are routed through the catchment using the kinematic wave equation. For overland
flow, width (self.SW) and length (self.DL) characteristics are based on the grid cell dimensions and flow direction, and
in case of a river cell, the river width is subtracted from the overland flow width. For river cells, both width and length
can either be supplied by separate maps:
• wflow_riverwidth.map
• wflow_riverlength.map
or determined from the grid cell dimension and flow direction for river length and from the DEM, the upstream area
and yearly average discharge for the river width (Finnegan et al., 2005):
The yearly average Q at outlet is scaled for each point in the drainage network with the upstream area. 𝛼 ranges from
5 to > 60. Here 5 is used for hardrock, large values are used for sediments.
When the river length is calculated based on grid dimensions and flow direction in wflow_sbm, it is possible to provide a map with factors to multiply the calculated river length with (wflow_riverlength_fact.map, default is 1.0).
The slope for kinematic overland flow and river flow can either be provided by maps:
• Slope.map (overland flow)
• RiverSlope.map (river flow)
or calculated by wflow_sbm based on the provided DEM and Slope function of PCRaster.
Note: If a river slope is available as a map, we recommend also providing a river length map to avoid possible inconsistencies between datasets. If a river length map is not provided, the river length is calculated based on grid dimensions and flow direction, and, if available, the wflow_riverlength_fact.map.
The table below lists commonly used Manning's N values (in the N_River.tbl file). Please note that the values for non-river cells may arguably be set significantly higher. (Use N.tbl for non-river cells and N_River.tbl for river cells.)
Natural lakes and reservoirs can also be added to the model and taken into account during the routing process. For
more information, see the documentation of the wflow_funcs module.
Subcatchment flow
Normally the kinematic wave is continuous throughout the model. By using the SubCatchFlowOnly entry in the model section of the ini file, all flow is kept within each subcatchment and no flow is transferred from one subcatchment to another. This can be handy when connecting the results of the model to a water allocation model such as Ribasim.
Example:
[model]
SubCatchFlowOnly = 1
The figure below shows the stores and fluxes in the model in terms of internal variable names.
Although the model has been set up to do as little data processing as possible, it includes an option to apply an altitude correction to the temperature inputs. The three grids below demonstrate the principle: the input temperature (left), the correction grid (middle) and the corrected temperature (right).

Input temperature:   Correction grid:   Corrected temperature:
5 5 5                1 4 7              4  1 -2
5 5 5                2 5 8              3  0 -3
5 5 5                3 6 9              2 -1 -4
wflow_sbm takes the correction grid as input and applies this to the input temperature. The correction grid has to be
made outside of the model. The correction grid is optional.
The temperature correction map is specified in the model section of the ini file:
[model]
TemperatureCorrectionMap=NameOfTheMap
If the entry is not in the file, the correction will not be applied.
Note: If SoilThickness and SoilMinThickness are not equal, wflow_sbm will scale SoilThickness based on the
topographic wetness index.
WI = pcr.ln(
    pcr.accuflux(self.TopoLdd, 1) / self.landSlope
)  # Topographic wetness index. Scale WI by zone/subcatchment assuming these are also geological units
Introduction
As with all hydrological models calibration is needed for optimal performance. We have calibrated different
wflow_sbm models using simple shell scripts and command-line parameters to multiply selected model parameters
and evaluate the results later.
Parameters
SoilThickness Increasing the soil depth and thus the storage capacity of the soil will decrease the outflow.
M Once the depth of the soil has been set (e.g. for different land-use types), the M parameter is the most important variable in calibrating the model. The decay of the conductivity with depth controls the baseflow recession and part of the stormflow curve.
N and N_River The Manning N parameter controls the shape of the hydrograph (the peak parts). In general it is advised to set N to realistic values for the rivers; for the land phase, higher values are usually needed.
KsatVer and KsatHorFrac Increasing KsatVer and/or KsatHorFrac will lower the hydrograph (baseflow) and flatten the peaks. The latter effect also depends on the shape of the catchment.
References
• Vertessy, R.A. and Elsenbeer, H., 1999, Distributed modelling of storm flow generation in an Amazonian rain-
forest catchment: effects of model parameterization, Water Resources Research, vol. 35, no. 7, pp. 2173–2187.
• Brooks, R., and Corey, T., 1964, Hydraulic properties of porous media, Hydrology Papers, Colorado State
University, 24, doi:10.13031/2013.40684.
• Chow, V., Maidment, D. and Mays, L., 1988, Applied Hydrology. McGraw-Hill Book Company, New York.
• Finnegan, N.J., Roe, G., Montgomery, D.R., and Hallet, B., 2005, Controls on the channel width of rivers: Im-
plications for modeling fluvial incision of bedrock, Geology, v. 33; no. 3; p. 229–232; doi: 10.1130/G21171.1.
• Wigmosta, M. S., L. J. Lane, J. D. Tagestad, and A. M. Coleman, 2009, Hydrologic and erosion models to assess
land use and management practices affecting soil erosion, Journal of Hydrologic Engineering, 14(1), 27-41.
Introduction
An experimental implementation of the gr4 model. It is based on the hourly (gr4h) version.
Dependencies
[PM]
Configuration
The model needs a number of settings in the ini file. The default name for the ini file is wflow_gr4.ini.
See below for an example:
[model]
Tslice=1
# Maximum upstream distance to update the flow in metres
[gr4]
dt = 1
B = 0.9
D = 1.25
X4 = 32.83
# X1,X2 and X3 are given as .tbl files or maps
[layout]
# if set to zero the cell-size is given in lat/long (the default)
sizeinmetres = 1
[outputmaps]
# Add maps here
# List all timeseries in tss format to save in this section. Timeseries are
# produced as averages per subcatchment. The discharge (run) timeseries
# is always saved (as samples at the gauge locations).
[outputtss]
self.S_X1=S_X1
self.R_X3=R_X3
self.Pr=Pr
self.Q=Q
Introduction
The wflow_w3ra and wflow_w3 models are adaptations of the Australian Water Resources Assessment Landscape model (AWRA-L). The AWRA-L model is developed through the Water Information Research and Development Alliance between the Bureau of Meteorology and CSIRO. It is a hydrologic model that produces water balance component estimates for surface runoff, root water uptake, soil water drainage, groundwater discharge, capillary rise and streamflow. Radiation and energy balance, vapour fluxes and vegetation phenology are also included.
The following Figure describes the water stores and fluxes considered in AWRA-L:
Dependencies
Configuration
It needs a number of settings in the ini file. The default name for the file is wflow_w3ra.ini or wflow_w3.ini.
Examples are available in \wflow\examples\wflow_w3ra\ and \wflow\examples\wflow_rhine_w3\.
Introduction
Topoflex applies the concept of using different model configurations for different hydrological response units (HRUs). These HRUs can for a large part be derived from topography ([savenije]); however, additional data such as land use and geology can further optimize the selection of hydrological response units. In contrast to other models, topoflex generally uses a small number of HRUs, which are defined manually, i.e. 2-5 depending on the size of and the variability within the catchment. Nevertheless, the model code is written such that it can handle an infinite number of classes. The individual HRUs are treated as parallel model structures and only connected via the groundwater reservoir and the stream network.
The model code is written in a modular setup: different conceptualisations can be selected for each reservoir for
each HRU. The code currently contains some reservoir conceptualisations (see below), but new conceptualisations can
easily be added.
Examples of the application of topoflex can be found in Gharari et al. (2014), Gao et al. (2014) and Euser et al. (2015). Figure 1 shows a possible model conceptualisation: one that uses two HRUs (wetland (W) and hillslope (H)), adapted from [euser].
Fig. 11: Schematic view of the relevant components of the topoflex model
• Using a set of HRUs introduces additional complexity (structural and computational) in the model. Therefore, calibration of the model should be carried out carefully (see below for some tips and tricks that might help).
• The selection and delineation of the HRUs is a relatively subjective exercise: different data sources and preferably some expert knowledge might help to construct meaningful HRUs.
Spatial resolution
The model uses a grid based on the available input or required output data. The combination with the HRUs has to
be done in the preparation step: for each cell the percentage of each class needs to be determined and stored in a
staticmap.
The cells do not all have to have the same size: the size of individual cells can be stored in a staticmap, which is used to calculate the contribution of each cell to the generated discharge.
Input data
The required input data consists of timeseries of precipitation, temperature and potential evaporation. In case the Jarvis equations are used to determine transpiration, more input variables are required, such as humidity, radiation, wind speed and LAI.
Different HRUs
Currently there are conceptualisations for three main HRUs: wetland, hillslope and (agricultural) plateau. These conceptualisations are simply a set of reservoirs, which match the perception of different landscape elements (in western Europe). Depending on the area of interest the HRUs can be adapted, as well as the set of reservoirs per HRU.
wetland
Wetlands are areas close to the stream, where the groundwater levels are assumed to be shallow and assumed to rise
quickly during an event. The dominant runoff process in wetlands is assumed to be saturation overland flow, leading
to quick and sharp responses to rainfall.
hillslope
Hillslopes are steep areas in a catchment, generally covered with forest. The dominant runoff process is assumed to be quick subsurface flow: the hillslopes mainly contribute to the discharge during the winter period.
plateau
Plateaus are flat areas high above the stream, thus with deep groundwater levels. Depending on the specific conditions, the dominant runoff processes are groundwater recharge, quick subsurface flow and hortonian overland flow. The latter is especially important in agricultural areas.
The routing of generated discharge is based on the average velocity of water through the river, which is currently set to 1 m/s. For each cell the average distance to the outlet is calculated and divided by the selected flow velocity to determine the delay at the outlet (see the sketch after the list below).
There are currently two options to apply this routing:
1. only calculating the delay relevant for the discharge at the outlet
2. calculating the delay (and thus discharge) over the stream network. This option is mainly relevant for calcula-
tions with a finer grid
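A minimal sketch of this delay-based routing with PCRaster Python; the map names and the hourly timestep are illustrative assumptions:

import pcraster as pcr

ldd = pcr.readmap("staticmaps/wflow_ldd.map")         # local drainage direction
outlets = pcr.readmap("staticmaps/wflow_gauges.map")  # outlet / gauge locations
velocity = 1.0        # average flow velocity [m/s], as stated above
timestepsecs = 3600.0

# distance along the drainage network from each cell to the outlet
dist_to_outlet = pcr.ldddist(ldd, pcr.boolean(outlets), 1.0)
# delay [timesteps] before water generated in a cell reaches the outlet
delay = pcr.roundup(dist_to_outlet / velocity / timestepsecs)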
Including more HRUs in the model leads to an increase in parameters. To make it still feasible to calibrate the model, a set of constraints is introduced: parameter and process constraints. These constraints are assumed relations between parameters and fluxes of different HRUs and prevent the selection of unrealistic parameters. The constraints are an important part of the perceptual model, but are not (yet) included in the wflow code. Below some examples of constraints are given; more examples can be found in Gharari et al. (2014), Gao et al. (2014) and Euser et al. (2015).
Parameter constraints
Parameter constraints are relations between parameters of different HRUs, for example the root zone storage capacity (S_u,max), which is assumed to be larger on hillslopes than in wetlands, as in the latter the groundwater levels quickly rise during a rain event, reducing the root zone storage capacity. Parameter constraints are calculated before the model runs.
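A minimal sketch of such a parameter constraint with PCRaster Python (the map names are hypothetical): before the run, the hillslope root zone storage capacity is forced to be at least as large as the wetland one.

import pcraster as pcr

sumax_wetland = pcr.readmap("staticmaps/sumax_W.map")
sumax_hillslope = pcr.readmap("staticmaps/sumax_H.map")

# enforce the constraint S_u,max (hillslope) >= S_u,max (wetland)
sumax_hillslope = pcr.max(sumax_hillslope, sumax_wetland)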
Process constraints
Process constraints are comparable with parameter constraints, but focus on fluxes from different HRUs; for example, the fast response from the wetlands is assumed to be larger than the fast response of the hillslopes in the summer period, as on the hillslopes generally more storage is required before a runoff generation threshold is exceeded. Process constraints are calculated after the model runs.
• Euser, T., Hrachowitz, M., Winsemius, H. C. and Savenije, H. H. G.: The effect of forcing and landscape
distribution on performance and consistency of model structures. Hydrol. Process., doi: 10.1002/hyp.10445,
2015.
• Gao, H., Hrachowitz, M., Fenicia, F., Gharari, S., and Savenije, H. H. G.: Testing the realism of a topography-
driven model (FLEX-Topo) in the nested catchments of the Upper Heihe, China, Hydrol. Earth Syst. Sci., 18,
1895-1915, doi:10.5194/hess-18-1895-2014, 2014.
• Gharari, S., Hrachowitz, M., Fenicia, F., Gao, H., and Savenije, H. H. G.: Using expert knowledge to increase
realism in environmental system models can dramatically reduce the need for calibration, Hydrol. Earth Syst.
Sci., 18, 4839-4859, doi:10.5194/hess-18-4839-2014, 2014.
• Savenije, H. H. G.: HESS Opinions “Topography driven conceptual modelling (FLEX-Topo)”, Hydrol. Earth
Syst. Sci., 14, 2681-2692, doi:10.5194/hess-14-2681-2010, 2010.
InputSeries=1
# InterpolationMethod: inv = inverse distance, pol = Thiessen polygons, this option is not relevant if the number of cells is equal to the number of meteo gauges
InterpolationMethod=inv
L_IRURFR = 0
L_URFR = 0
L_FR = 0
# maxTransitTime is the travel time (same time resolution as model calculations) through the stream of the most upstream point, rounded up
maxTransitTime = 9
# DistForcing is the number of used rainfall gauges
DistForcing = 3
# maxGaugeId is the id of the raingauge with the highest id-number, this setting has to do with writing the output data for the correct stations
maxGaugeId = 10
# timestepsecs is the number of seconds per time step
timestepsecs = 3600
## the settings below deal with the selection of classes and conceptualisations of reservoirs
## classes indicates the number of classes, the specific characters are not important (will be used for writing output??), but the number of sets of characters is important
## wflow maps with percentages, the numbers correspond to the indices of the characters in classes.
wflow_percent_0 = staticmaps/wflow_percentW4.map
wflow_percent_1 = staticmaps/wflow_percentHPPPA4.map
[layout]
# if set to zero the cell-size is given in lat/long (the default)
sizeinmetres = 0
[outputmaps]
#self.Si_diff=sidiff
#self.Pe=pe
#self.Ei=Ei
# List all timeseries in tss format to save in this section. Timeseries are
# produced per subcatchment.
[outputtss_0]
samplemap=staticmaps/wflow_mgauges.map
#states
self.Si[0]=SiW.tss
self.Si[1]=SiH.tss
self.Sf[1]=SfH.tss
self.Sf[0]=SfW.tss
self.Su[1]=SuH.tss
self.Su[0]=SuW.tss
#fluxen
self.Precipitation=Prec.tss
self.Qu_[0]=QuW.tss
self.Qu_[1]=QuH.tss
self.Ei_[0]=EiW.tss
self.Ei_[1]=EiH.tss
self.Eu_[0]=EuW.tss
self.Eu_[1]=EuH.tss
self.Pe_[0]=peW.tss
self.Pe_[1]=peH.tss
self.Perc_[1]=PercH.tss
self.Cap_[0]=CapW.tss
self.Qf_[1] = QfH.tss
self.Qf_[0] = QfW.tss
self.Qfcub = Qfcub.tss
self.Qtlag = Qtlag.tss
[outputtss_1]
samplemap = staticmaps/wflow_gauges.map
#states
self.Ss = Ss.tss
#fluxen
self.Qs = Qs.tss
self.QLagTot = runLag.tss
self.WBtot = WB.tss
[modelparameters]
# Format:
# name=stack,type,default,verbose,[lookupmap_1],[lookupmap_2],...,[lookupmap_n]
# example:
# RootingDepth=monthlyclim/ROOT,monthlyclim,100,1
# - tss: read a tss file and link it to a map (only one allowed) using timeinputscalar
Precipitation=intss/1_P.tss,tss,0.0,1,staticmaps/wflow_mgauges.map
Temperature=intss/1_T.tss,tss,10.5,1,staticmaps/wflow_mgauges.map
PotEvaporation=intss/1_PET.tss,tss,0.0,1,staticmaps/wflow_mgauges.map
Introduction
In 2018 the following PCR-GLOBWB (PCRaster Global Water Balance) version was added to the wflow framework:
https://github.com/UU-Hydro/PCR-GLOBWB_model/tree/v2.1.0_beta_1
Default initial conditions (set in the code) are used when these are not set in the ini file. Warm states can be set in the ini file as follows:
[forestOptions]
# initial conditions:
interceptStorIni = landSurface.interceptStor_forest.map
snowCoverSWEIni = landSurface.snowCoverSWE_forest.map
snowFreeWaterIni = landSurface.snowFreeWater_forest.map
topWaterLayerIni = landSurface.topWaterLayer_forest.map
storUppIni = landSurface.storUpp005030_forest.map
storLowIni = landSurface.storLow030150_forest.map
interflowIni = landSurface.interflow_forest.map
[framework]
netcdfoutput = outmaps.nc
netcdfinput = inmaps/forcing.nc
netcdfwritebuffer=20
EPSG = EPSG:4326
[run]
# either a runinfo file or a start and end-time are required
starttime= 2002-01-01 00:00:00
endtime= 2002-01-30 00:00:00
reinit = 0
timestepsecs = 86400
runlengthdetermination=steps
[model]
modeltype = wflow_pcrglobwb
[layout]
# if set to zero the cell-size is given in lat/long (the default)
sizeinmetres = 0
[outputmaps]
self.routing.subDischarge = Qro
self.routing.waterBodyStorage = wbs
self.landSurface.storUpp = su1
self.landSurface.storLow = slo
[globalOptions]
[landSurfaceOptions]
debugWaterBalance = True
numberOfUpperSoilLayers = 2
topographyNC = topoProperties5ArcMin.nc
soilPropertiesNC = soilProperties5ArcMin.nc
includeIrrigation = True
# netcdf time series for historical expansion of irrigation areas (unit: hectares).
# Note: The resolution of this map must be consistent with the resolution of cellArea.
historicalIrrigationArea = irrigationArea05ArcMin.nc
includeDomesticWaterDemand = True
includeIndustryWaterDemand = True
includeLivestockWaterDemand = True
allocationSegmentsForGroundSurfaceWater = uniqueIds60min.nom_5min.map
irrigationSurfaceWaterAbstractionFractionData = AEI_SWFRAC_5min.map
irrigationSurfaceWaterAbstractionFractionDataQuality = AEI_QUAL_5min.map
maximumNonIrrigationSurfaceWaterAbstractionFractionData = max_city_sw_fraction_5min.map
[forestOptions]
name = forest
debugWaterBalance = True
landCoverMapsNC = forestProperties5ArcMin.nc
cropCoefficientNC = cropKC_forest_daily366.nc
interceptCapNC = interceptCap_forest_daily366.nc
coverFractionNC = coverFraction_forest_daily366.nc
# initial conditions:
interceptStorIni = landSurface.interceptStor_forest.map
snowCoverSWEIni = landSurface.snowCoverSWE_forest.map
snowFreeWaterIni = landSurface.snowFreeWater_forest.map
topWaterLayerIni = landSurface.topWaterLayer_forest.map
storUppIni = landSurface.storUpp005030_forest.map
storLowIni = landSurface.storLow030150_forest.map
interflowIni = landSurface.interflow_forest.map
[grasslandOptions]
name = grassland
debugWaterBalance = True
landCoverMapsNC = grasslandProperties5ArcMin.nc
#
# Parameters for the Arno's scheme:
arnoBeta = None
# If arnoBeta is defined, the soil water capacity distribution is based on this.
# If arnoBeta is NOT defined, maxSoilDepthFrac must be defined such that arnoBeta will be calculated based on maxSoilDepthFrac and minSoilDepthFrac.
cropCoefficientNC = cropKC_grassland_daily366.nc
interceptCapNC = interceptCap_grassland_daily366.nc
coverFractionNC = coverFraction_grassland_daily366.nc
# initial conditions:
interceptStorIni = landSurface.interceptStor_grassland.map
snowCoverSWEIni = landSurface.snowCoverSWE_grassland.map
snowFreeWaterIni = landSurface.snowFreeWater_grassland.map
topWaterLayerIni = landSurface.topWaterLayer_grassland.map
#storUpp000005Ini = landSurface.storUpp000005_grassland.map
storUppIni = landSurface.storUpp005030_grassland.map
storLowIni = landSurface.storLow030150_grassland.map
interflowIni = landSurface.interflow_grassland.map
[irrPaddyOptions]
name = irrPaddy
debugWaterBalance = True
#
# other parameter values
minTopWaterLayer = 0.05
minCropKC = 0.2
minInterceptCap = 0.0002
cropDeplFactor = 0.2
cropCoefficientNC = cropKC_irrPaddy_daily366.nc
# initial conditions:
interceptStorIni = landSurface.interceptStor_irrPaddy.map
snowCoverSWEIni = landSurface.snowCoverSWE_irrPaddy.map
snowFreeWaterIni = landSurface.snowFreeWater_irrPaddy.map
topWaterLayerIni = landSurface.topWaterLayer_irrPaddy.map
[irrNonPaddyOptions]
name = irrNonPaddy
debugWaterBalance = True
#
# other parameter values
minTopWaterLayer = 0.0
minCropKC = 0.2
minInterceptCap = 0.0002
cropDeplFactor = 0.5
cropCoefficientNC = cropKC_irrNonPaddy_daily366.nc
# initial conditions:
interceptStorIni = landSurface.interceptStor_irrNonPaddy.map
snowCoverSWEIni = landSurface.snowCoverSWE_irrNonPaddy.map
snowFreeWaterIni = landSurface.snowFreeWater_irrNonPaddy.map
topWaterLayerIni = landSurface.topWaterLayer_irrNonPaddy.map
#storUpp000005Ini = landSurface.storUpp000005_irrNonPaddy.map
storUppIni = landSurface.storUpp005030_irrNonPaddy.map
storLowIni = landSurface.storLow030150_irrNonPaddy.map
interflowIni = landSurface.interflow_irrNonPaddy.map
[groundwaterOptions]
debugWaterBalance = True
groundwaterPropertiesNC = groundwaterProperties5ArcMin_5min.nc
limitFossilGroundWaterAbstraction = True
minimumTotalGroundwaterThickness = 0.000
estimateOfTotalGroundwaterThickness = thickness_05min_5min.map
estimateOfRenewableGroundwaterCapacity = 0.0
# annual pumping capacity for each region (unit: billion cubic meter per year), should be given in a netcdf file
# initial conditions:
storGroundwaterIni = groundwater.storGroundwater.map
storGroundwaterFossilIni = groundwater.storGroundwaterFossil.map
#
avgNonFossilGroundwaterAllocationLongIni = groundwater.avgNonFossilAllocation.map
avgNonFossilGroundwaterAllocationShortIni = groundwater.avgNonFossilAllocationShort.map
avgTotalGroundwaterAbstractionIni = groundwater.avgAbstraction.map
avgTotalGroundwaterAllocationLongIni = groundwater.avgAllocation.map
avgTotalGroundwaterAllocationShortIni = groundwater.avgAllocationShort.map
allocationSegmentsForGroundwater = uniqueIds30min.nom_5min.map
#~ allocationSegmentsForGroundwater = None
[routingOptions]
debugWaterBalance = True
lddMap = lddsound_05min.map
cellAreaMap = cellarea05min.map
gradient = ChannelGradient_05min.map
# manning coefficient
manningsN = 0.04
routingMethod = accuTravelTime
# TODO: including kinematicWave
#~ # Maximum length of a sub time step in seconds (optional and only used if either kinematicWave or simplifiedKinematicWave is used)
#~ # - Note that too long sub time step may create water balance errors.
#~ # - Default values: 3600 seconds for 30 arcmin ; 720 seconds for 5 arcmin
#~ maxiumLengthOfSubTimeStep = 3600.
#~ maxiumLengthOfSubTimeStep = 720.
# number of days (timesteps) that have been performed for spinning up initial conditions in the routing module (i.e. channelStorageIni, avgDischargeLongIni, avgDischargeShortIni, etc.)
timestepsToAvgDischargeIni = routing.timestepsToAvgDischarge.map
# Note that:
# - maximum number of days (timesteps) to calculate long term average flow values (default: 5 years = 5 * 365 days = 1825)
Introduction
In 2017 the hydrological model SPHY (Spatial Processes in HYdrology) version 2.1 was added to the wflow framework:
https://github.com/FutureWater/SPHY/tree/v2.1
An example model is available in \wflow\examples\wflow_ganga_sphy\.
Introduction
STREAM (Spatial Tools for River Basins and Environment and Analysis of Management Options) has been added to
the wflow framework as part of the MSc. thesis work “Routing and calibration of distributed hydrological models” by
Alkemade (2019).
STREAM is a hydrological model that was developed by Aerts et al. (1999). It is a distributed grid-based water-
balance model, and as shown in the flow chart of STREAM it consists of a snow, soil and groundwater reservoir,
and has a fast and slow runoff component. The soil moisture storage affects the fast component and ground water
storage affects the slow component. Both behave as linear reservoirs. Potential evapotranspiration is derived from
temperature, based on the Thornthwaite and Mather (1957) approach. Snow accumulation is equal to precipitation
when temperature is below a threshold (e.g. zero degrees Celsius), and snow melts linearly depending on temperature.
The fast and slow flows are routed to the catchment outlet by flow accumulation (based on a DEM) and assuming that
all water moves through the system in one model time step (monthly).
Implementation in wflow
To add STREAM to the wflow framework, the model was re-written in the PCRaster Python framework. STREAM has until now mainly been used at a monthly time step and calculates discharge by flow accumulation of upstream runoff. The following adjustment was made to run STREAM at a daily timestep:
• The original STREAM does not require any evaporation data and instead estimates it from temperature data on
a monthly basis using the Thornthwaite and Mather (1957) approach. For wflow the calculation of evapotranspi-
ration via temperature was taken out and potential evapotranspiration data is used as forcing to STREAM. This
means that the wflow version of STREAM now requires not two but three forcings (precipitation, temperature
and potential evapotranspiration).
Wflow_stream requires the following static maps, for the following parameters:
• whc [mm]
• C [day]
• cropf [-]
• meltf [-]
• togwf [-]
The parameter C is a recession coefficient and used in the draining of the groundwater reservoir. In the original
STREAM model, this parameter is based on a combination of a categorical variable C with values 1, 2, 3 and 4 from
the average slope in a cell (from a DEM), with higher slopes getting lower values and therefore a faster draining of the
groundwater reservoir, and a global parameter Ccal that steers how fast groundwater flows (C * Ccal). The parameter
whc represents the maximum soil water holding capacity. The parameter cropf represents the crop factor, to determine
the actual evapotranspiration from the potential evapotranspiration. Parameter meltf is a melt factor that controls the
rate of snow melt. Parameter togwf separates the fraction of excess water going to groundwater and direct runoff.
Snow accumulates below a surface temperature of 3.0 °C. Snow melts linearly based on surface temperature and a melt factor self.MELTcal.
# snow routine
snowfall = self.precipitation
snowfall = scalar(self.temperature < 3.0) * snowfall
self.snow = self.snow + snowfall
melt = (self.MELTcal * self.temperature)
melt = scalar(self.temperature > 3.0) * melt
melt = max(0.0, min(self.snow, melt))
self.snow = self.snow - melt
self.precipitation = self.precipitation - snowfall + melt
The Thornthwaite-Mather procedure (Thornthwaite and Mather, 1955) is used for modelling the soil water balance:

1. When $P - PET < 0$ (the soil is drying), the available soil water and the excess water are:

$$AW_t = AW_{t-1}\,\exp\left(\frac{P - PET}{WHC}\right), \qquad Excess = 0$$

2. When $P - PET > 0$ and $AW_{t-1} + (P - PET) < WHC$ (the soil is wetting below capacity):

$$AW_t = AW_{t-1} + (P - PET), \qquad Excess = 0$$

3. When $P - PET > 0$ and $AW_{t-1} + (P - PET) > WHC$ (the soil is wetting above capacity):

$$AW_t = WHC, \qquad Excess = AW_{t-1} + (P - PET) - WHC$$
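A minimal sketch of this soil water accounting with PCRaster Python map operations; P, PET, AW and WHC are assumed to be scalar maps and the names are illustrative:

import pcraster as pcr

budget = P - PET                 # net input to the soil [mm]
drying = budget < 0.0

# case 1: drying soil, exponential depletion of the available water, no excess
AW_dry = AW * pcr.exp(budget / WHC)

# cases 2 and 3: wetting soil, fill up to WHC, the remainder becomes excess
AW_wet = pcr.min(AW + budget, WHC)
excess_wet = pcr.max(AW + budget - WHC, 0.0)

AW = pcr.ifthenelse(drying, AW_dry, AW_wet)
excess = pcr.ifthenelse(drying, pcr.scalar(0.0), excess_wet)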
Excess water (self.excess) is separated by a fraction (self.TOGWcal) into seepage (self.togw) to groundwater (self.Ground_water) and direct runoff. Seepage is added to the groundwater reservoir, and flow from the groundwater reservoir (self.sloflo) is modelled as a linear reservoir. Total runoff (self.runoff) consists of direct runoff and groundwater flow.
# seepage to groundwater
self.runoff = self.togwf * self.excess
self.togw = self.excess - self.runoff
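The remaining steps described above (adding seepage to the groundwater store, draining it as a linear reservoir with recession coefficient C, and summing total runoff) could look roughly as follows; the names self.Ground_water, self.C and self.sloflo are assumptions based on the description, not verified wflow_stream code:

# hedged sketch, not the exact wflow_stream code
self.Ground_water = self.Ground_water + self.togw   # add seepage to the groundwater store
self.sloflo = self.Ground_water / self.C            # linear reservoir outflow (slow flow)
self.Ground_water = self.Ground_water - self.sloflo
self.runoff = self.runoff + self.sloflo             # total runoff = direct runoff + slow flow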
References
• Aerts, J.C.J.H., Kriek, M., Schepel, M., 1999, STREAM (Spatial tools for river basins and environment and
analysis of management options): ‘set up and requirements’, Phys. Chem. Earth, Part B Hydrol. Ocean.
Atmos., 24(6), 591–595.
• Alkemade, G.I., 2019, Routing and calibration of distributed hydrological models, MSc. Thesis, VU Amster-
dam, Faculty of Science, Hydrology.
• Thornthwaite, C.W., and Mather, J.R., 1955, The water balance, Publ. Climatol., 8(1), 1–104.
• Thornthwaite, C.W., and Mather, J.R., 1957, Instructions and tables for computing potential evapotranspiration
and the water balance: Centerton, N.J., Laboratory of Climatology, Publ. Climatol., 10(3), 85–311.
Introduction
The wflow routing module uses the PCRaster kinematic wave to route water over a DEM. By adding a bankfull level and a floodplain width to the configuration the model can also include estimated flow over a floodplain. In addition, simple reservoirs can be configured.
Method
A simple river profile is defined using a river width, a bankfull height and a floodplain width. A schematic drawing is shown in the figure below.
First the maximum bankfull flow for each cell is determined using:

$$Q_b = \left(\frac{H_b}{\alpha_{ch}}\,B_w\right)^{1/\beta}$$
Next the channel flow is determined by taking the minimum of the total flow and the maximum bankfull flow, and the floodplain flow is determined by subtracting the bankfull flow from the total flow (see the sketch below). The channel water level then follows from $H_{ch} = \alpha_{ch} Q_{ch}^{\beta} / B_w$, where $H_{ch}$ is the water level for the channel, $\alpha_{ch}$ is the kinematic wave coefficient for the channel, $Q_{ch}$ is the discharge in the channel and $B_w$ is the width of the river.
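A minimal sketch of this splitting with PCRaster Python, using illustrative names (Qtot for the total kinematic wave discharge and Qb for the maximum bankfull flow):

import pcraster as pcr

# channel flow is limited to the maximum bankfull flow
Qch = pcr.min(Qtot, Qb)
# whatever exceeds the bankfull flow runs over the floodplain
Qfp = pcr.max(Qtot - Qch, 0.0)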
If the water level is above bankfull, the water level on the floodplain is calculated as follows:

$$H_{fp} = \alpha_{fp} Q_{fp}^{\beta} / (B_w + P_{fp})$$

where $H_{fp}$ is the water level on the floodplain, $Q_{fp}$ is the discharge on the floodplain, $P_{fp}$ is the wetted perimeter of the floodplain and $\alpha_{fp}$ is the kinematic wave coefficient for the floodplain.
The wetted perimeter of the channel, $P_{ch}$, is determined by:

$$P_{ch} = 2.0\,H_{ch} + B_w$$
The wetted perimeter of the floodplain is derived from the floodplain water level by scaling the maximum floodplain width with a sigmoid function:

$$N = 2\left(\frac{1}{1 + e^{-c H_{fp}}} - \frac{1}{2}\right), \qquad P_{fp} = N\,W_{fp}$$

The first equation defines the upper half of an S-shaped (sigmoid) curve and returns values between 0.001 and 1.0. The c parameter defines the sharpness of the function: a high value of c turns this into a step-wise function, while a low value makes the function smoother. The default value is c = 0.5. For example, with this default value a floodplain level of 1 m results in an N value of 0.25 and 2 m returns 0.46. In the second equation this fraction is multiplied by the maximum floodplain width $W_{fp}$.
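A small plain-Python check of the sigmoid scaling reconstructed above (purely illustrative):

import numpy as np

def floodplain_fraction(h_fp, c=0.5):
    """Upper half of a sigmoid: ~0 for h_fp = 0, approaching 1 for large h_fp."""
    return 2.0 * (1.0 / (1.0 + np.exp(-c * h_fp)) - 0.5)

print(floodplain_fraction(1.0))  # ~0.245, i.e. the ~0.25 quoted above
print(floodplain_fraction(2.0))  # ~0.462, i.e. the ~0.46 quoted above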
The 𝛼 for the channel and floodplain are calculated as follows:
$$\alpha_{ch} = \left(n_{ch} / \sqrt{slope}\right)^{\beta} P_{ch}^{(2.0/3.0)\beta}$$

$$\alpha_{fp} = \left(n_{fp} / \sqrt{slope}\right)^{\beta} P_{fp}^{(2.0/3.0)\beta}$$
In which slope is the slope of the river bed and floodplain, and $n_{ch}$ and $n_{fp}$ represent the Manning's n for the channel and floodplain respectively.
A compound 𝛼𝑡𝑜𝑡𝑎𝑙 is estimated by first calculating a compound n value 𝑛𝑡𝑜𝑡𝑎𝑙 :
$$n_{total} = \left(\frac{P_{ch}}{P_{total}}\,n_{ch}^{3/2} + \frac{P_{fp}}{P_{total}}\,n_{fp}^{3/2}\right)^{2/3}$$

$$\alpha_{total} = \left(n_{total} / \sqrt{slope}\right)^{\beta} (P_{fp} + P_{ch})^{(2.0/3.0)\beta}$$
The 𝛼𝑡𝑜𝑡𝑎𝑙 is used in the pcraster kinematic function to get the discharge for the next timestep.
The code is implemented in the updateRunoff attribute of the model class as follows:
# Alpha
self.WetPComb = self.Pch + self.Pfp
self.Ncombined = (self.Pch/self.WetPComb*self.N**1.5 + self.Pfp/self.WetPComb*self.NFloodPlain**1.5)**(2./3.)
self.OldKinWaveVolume = self.KinWaveVolume
self.KinWaveVolume = (self.WaterLevelCH * self.Bw * self.DCL) + (self.WaterLevelFP * (self.Pfp + self.Bw) * self.DCL)
Reservoirs
Simple reservoirs can be included within the kinematic wave routing by supplying a map with the locations of the reservoirs, in which each reservoir has a unique id. Furthermore, a set of lookup tables must be defined linking the reservoir ids to reservoir characteristics:
• ResTargetFullFrac.tbl - Target fraction full (of max storage) for the reservoir: number between 0 and 1
• ResTargetMinFrac.tbl - Target minimum full fraction (of max storage): number between 0 and 1 and smaller than ResTargetFullFrac
• ResMaxVolume.tbl - Maximum reservoir storage (above which water is spilled) [m^3]
• ResDemand.tbl - Water demand on the reservoir (all combined) m^3/s
• ResMaxRelease.tbl - Maximum Q that can be released if below spillway [m^3/s]
By default the reservoirs are not included in the model. To include them put the following lines in the .ini file of the
model.
[modelparameters]
# Add this if you want to model reservoirs
ReserVoirLocs=staticmaps/wflow_reservoirlocs.map,staticmap,0.0,0
ResTargetFullFrac=intbl/ResTargetFullFrac.tbl,tbl,0.8,0,staticmaps/wflow_reservoirlocs.map
ResTargetMinFrac=intbl/ResTargetMinFrac.tbl,tbl,0.4,0,staticmaps/wflow_reservoirlocs.map
ResMaxVolume=intbl/ResMaxVolume.tbl,tbl,0.0,0,staticmaps/wflow_reservoirlocs.map
ResMaxRelease=intbl/ResMaxRelease.tbl,tbl,1.0,0,staticmaps/wflow_reservoirlocs.map
ResDemand=intbl/ResDemand.tbl,tblmonthlyclim,1.0,0,staticmaps/wflow_reservoirlocs.map
In the above example most values are fixed throughout the year; only the demand is given per month of the year.
Subcatchment flow
Normally the kinematic wave is continuous throughout the model. By using the SubCatchFlowOnly entry in the model section of the ini file, all flow is kept within each subcatchment and no flow is transferred from one subcatchment to another. This can be handy when connecting the results of the model to a water allocation model such as Ribasim.
Example:
[model]
SubCatchFlowOnly = 1
Forcing data
The model needs one set of forcing data: IW (water entering the model for each cell in mm). The name of the mapstack can be defined in the ini file. By default it is inmaps/IW.
See below for an example:
[inputmapstacks]
Inwater = /run_default/outmaps/IW
Inflow = /inmaps/IF
[run]
starttime = 1995-01-31 00:00:00
endtime = 1995-02-28 00:00:00
timestepsecs = 86400
reinit = 0
[model]
modeltype = routing
AnnualDischarge = 2290
Alpha = 120
WIMaxScale = 0.8
Tslice = 1
UpdMaxDist = 300000.0
reinit = 1
fewsrun = 0
OverWriteInit = 0
updating = 0
updateFile = no_set
sCatch = 0
intbl = intbl
timestepsecs = 86400
MaxUpdMult = 1.3
MinUpdMult = 0.7
UpFrac = 0.8
SubCatchFlowOnly = 0
wflow_subcatch = staticmaps/wflow_subcatch.map
wflow_dem = staticmaps/wflow_dem.map
wflow_ldd = staticmaps/wflow_ldd.map
wflow_river = staticmaps/wflow_river.map
[framework]
outputformat = 1
debug = 0
netcdfinput = None
netcdfoutput = None
netcdfstaticoutput = None
netcdfstaticinput = None
EPSG = EPSG:4326
[layout]
sizeinmetres = 0
[outputmaps]
self.SurfaceRunoff = _run
self.Qfloodplain = _qfp
self.Qchannel = _qch
self.Qbankfull = _qbnk
self.WaterLevelFP = _levfp
self.WaterLevelCH = _levch
self.InwaterMM = _IW
self.floodcells = fcel
self.Qtot = QQQ
self.Pch = ch
self.Pfp = fp
self.Alpha = al
self.AlphaCh = alch
self.AlphaFP = alfp
self.Ncombined = nc
self.MassBalKinWave = wat
[outputcsv_0]
samplemap = None
[outputtss_0]
samplemap = None
A description of the implementation of the kinematic wave is given on the PCRaster website at http://pcraster.geo.uu.nl/pcraster/4.0.2/doc/manual/op_kinematic.html
In addition to the settings in the ini file you need to give the model additional maps or lookuptables in the staticmaps
or intbl directories:
staticmaps
wflow_subcatch.map Map of the subcatchment in the area. Usually shared with the hydrological model
wflow_dem.map The digital elevation model. Usually shared with the hydrological model
wflow_ldd.map The D8 local drainage network.
wflow_river.map Definition of the river cells.
wflow_riverlength.map Optional map that defines the actual length of the river in each cell.
wflow_riverlength_fact.map Optional map that defines a multiplication factor for the river length in
each cell.
wflow_gauges.map Map of river gauges that can be used in outputting timeseries
wflow_inflow.map Optional map of inflow points into the surface water. Limited testing.
wflow_riverwidth.map Optional map of the width of the river for each river cell.
wflow_floodplainwidth.map Optional map of the width of the floodplain for each river cell.
wflow_bankfulldepth.map Optional map of the level at which the river starts to flood and water will
also be conducted over the floodplain.
wflow_floodplaindist.map Optional map that defines the relation between the bankfull depth and the floodplain depth. Default = 0.5
wflow_landuse.map Required map of landuse/land cover. This map is used in the lookup tables to relate
parameters to landuse/landcover. Usually shared with the hydrological model
wflow_soil.map Required map of soil type. Usually shared with the hydrological model
Introduction
An experimental implementation of the full dynamic wave equations has been implemented. The current implementation is fairly unstable and very slow. However, in flat or tidal areas and areas that flood, the dynamic wave can provide much better results. The plot below is from the Rio Mamore in Bolivia, in the lower parts of the river with extensive wetlands that flood nearly every year.
Dependencies
This module is set up to be run in an existing case and runid of a wflow_sbm or wflow_hbv model. In order for wflow_wave to run, they must have saved discharge and water level for each timestep. This output will be used as forcing for the dynamic wave module. The wflow_wave module will also use the existing ldd and DEM.
Configuration
It needs a number of settings in the ini file. The default name for the file is wflow_wave.ini. It is also possible to insert this section in the wflow_sbm or wflow_hbv ini file and point to that file.
See below for an example:
[inputmapstacks]
# Name of the mapstack with discharge (output from the hydrological model)
Q = run
# Name of the mapstack with waterlevel (output from the hydrological model)
H = lev
[dynamicwave]
# number of substeps for the dynamic wave with respect to the model timesteps
dynsubsteps=24
# Optional river map for the dynamic wave that must be the same size or smaller as that of the
# kinematic wave
wflow_dynriver = staticmaps/wflow_dynriver.map
# a fixed water level for each non-zero point in the wflow_hboun map
# level > 0.0 use that level
# level == 0.0 use supplied timeseries (see levelTss)
# level < 0.0 use upstream water level
fixedLevel = 3.0
# if this is set the program will try to keep the volume at the pits at
# a constant value
lowerflowbound = 1
# instead of a fixed level a tss file with levels for each timesteps and each
# non-zero value in the wflow_hboun map
#levelTss=intss/Hboun.tss
Introduction
The wflow_floodmap module can generate flood maps from the output of a wflow_sbm, wflow_hbv or wflow_routing model. At the moment there are two approaches for flood mapping:
1. wflow_floodmap.py - this is a regular wflow-type model, running at the same resolution as the wflow model used to establish flood maps. The benefit is that it produces dynamic flood maps. The downside is that the results are in the same resolution as the original model. The method is also not volume conservative as it only does a planar inundation and is bound to lead to large overestimations in very flat areas.
2. wflow_flood.py (see Scripts folder). This is a postprocessor to the results of a wflow model and transforms low-resolution model results into a high-resolution flood map using a (possibly much) higher resolution terrain dataset as input. We recommend retrieving high resolution terrain from the Bare-Earth SRTM dataset by Bristol University. See
https://data.bris.ac.uk/data/dataset/10tv0p32gizt01nh9edcjzd6wa
Method
PM
Configuration
PM
Performs a planar volume spreading on outputs of a wflow_sbm|hbv|routing model run. The module can be used to
post-process model outputs into a flood map that has a (much) higher resolution than the model resolution.
The routine aggregates flooded water volumes occurring across all river pixels across a user-defined strahler order
basin scale (typically a quite small subcatchment) and then spreads this volume over a high resolution terrain model.
To ensure that the flood volume is not spread in a rice-field kind of way (first filling the lowest cell in the occurring
subbasin), the terrain data is first normalised to a Height-Above-Nearest-Drain (HAND) map of the associated typically
flooding rivers (to be provided by user through a catchment order) and flooding is estimated from this HAND map.
A sequential flood mapping is performed starting from the user defined Strahler order basin scale to the highest
Strahler orders. A HAND map is derived for each river order starting from the lowest order. These maps are then
used sequentially to spread flood volumes over the high resolution terrain model starting from the lowest catchment
order provided by the user (using the corresponding HAND map) to the highest stream order. Backwater effects from
flooding of higher orders catchments to lower order catchments are taken into account by taking the maximum flood
level of both.
Preferably a user should use the outputs of a wflow_routing model, because then the user can use the floodplain water level only (usually saved in a variable named levfp). If estimates from a HBV or SBM (or other wflow) model are used, we recommend that the user also provides a "bank-full" water level in the command line arguments. If not provided, wflow_flood will also spread water volumes occurring within the banks of the river, probably leading to an overestimation of flooding.
Ini-file settings
The module uses an ini file and a number of command line arguments to run. The ini-file contains inputs that are
typically the same across a number of runs with the module for a given study area (e.g. the used DEM, LDD, and
some run parameters). For instance, a user can prepare flood maps from different flood events computed with one
WFLOW model, using the same .ini file for each event.
The .ini file sections are treated below:
[HighResMaps]
dem_file =SRTM 90m merged/BEST90m_WGS_UTM42N.tif
ldd_file = SRTM 90m merged/LDD/ldd_SRTM0090m_WGS_UTM42N.map
stream_file = Processed DEMs/SRTM 90m merged/stream.map
[wflowResMaps]
riv_length_fact_file = floodhazardsimulations/Stepf_output/river_length_fact.map
riv_width_file = floodhazardsimulations/Stepf_output/wflow_floodplainwidth.map
ldd_wflow = floodhazardsimulations/Stepf_output/wflow_ldd.map
[file_settings]
latlon = 0
file_format = 0
The dem_file contains a file with the high-res terrain data. It MUST be in .tif format. This is because .tif files can
contain projection information. At the moment the .tif file must have the same projection as the WFLOW model (can
be WGS84, but also any local projection in meters), but we intend to also facilitate projections in the future.
The ldd_file contains the ldd, derived from the dem_file (PCRaster format)
The stream_file contains a stream order file (made with the PCRaster stream order file) derived from the LDD in
ldd_file.
riv_length_fact_file and riv_width_file contain the dimensions of the channels within the WFLOW pixels (unit meters)
and are therefore in the resolution of the WFLOW model. The riv_length_fact_file is used to derive a riv_length by
multiplying the LDD length from cell to cell within the LDD network with the wflow_riverlength_fact.map map,
typically located in the staticmaps folder of the used WFLOW model. The width map is also in meters, and should
contain the flood plain width in case the wflow_routing model is used (typical name is wflow_floodplainwidth.map).
If a HBV or SBM model is used, you should use the river width map instead (typical name wflow_riverwidth.map).
ldd_wflow is the ldd, derived at wflow resolution (typically wflow_ldd.map)
If latlon is 0, the cell-size is given in meters (the default).
If file_format is set to 0, the flood map is expected to be given in netCDF format (in the command line after -f); if set to 1, the PCRaster format is expected.
[metadata_global]
source=WFLOW model XXX
institution=Deltares
title=fluvial flood hazard from a wflow model
references=http://www.deltares.nl/
Conventions=CF-1.6
project=Afghanistan multi-peril country wide risk assessment
When very large domains are processed, the complete rasters will not fit into memory. In this case, the routine will
break the domain into several tiles and process these separately. The x_tile and y_tile parameters are used to set the
tile size. If you are confident that the whole domain will fit into memory (typically when the size is smaller than about
5,000 x 5,000 rows and columns) then just enter a number larger than the total amount of rows and columns. The
x_overlap and y_overlap parameters should be large enough to prevent edge effects at the edges of each tile where
averaging subbasins are cut off from the edge. Slightly larger tiles (defined by the overlap) are therefore processed and
the edges are consequently cut off after processing one tile to get a seamless product.
Some trial and error may be required to yield the right tile sizes and overlaps.
[inundation]
iterations=20
initial_level=32
The inundation section contains a number of settings for the flood fill algorithm. The number of iterations can be changed; we recommend setting it to 20 for accurate results. The initial_level is the largest water level that can occur during flooding. Make sure it is set to a level (much) higher than anticipated to occur, but not to a value close to infinity. If you set it orders of magnitude too high, the solution will not converge to a reasonable estimate.
When wflow_flood.py is run with the -h argument, you will receive the following feedback:
python wflow_flood.py -h
Usage: wflow_flood.py [options]
Options:
-h, --help show this help message and exit
-q, --quiet do not print status messages to stdout
-i INIFILE, --ini=INIFILE
ini configuration file
-f FLOOD_MAP, --flood_map=FLOOD_MAP
Flood map file (NetCDF point time series file
-v FLOOD_VARIABLE, --flood_variable=FLOOD_VARIABLE
variable name of flood water level
-b BANKFULL_MAP, --bankfull_map=BANKFULL_MAP
Map containing bank full level (is subtracted from
flood map, in NetCDF)
-c CATCHMENT_STRAHLER, --catchment=CATCHMENT_STRAHLER
Smallest Strahler order threshold over which flooding
may occur
-m MAX_CATCHMENT_STRAHLER, --max_catchment=MAX_CATCHMENT_STRAHLER
Largest Strahler order over which flooding may occur
-d DEST_PATH, --destination=DEST_PATH
Destination path
-H HAND_FILE_PREFIX, --hand_file=HAND_FILE_PREFIX
Further explanation:
-i the .ini file described in the previous section
-f The NetCDF output time series or a GeoTIFF containing the flood event to be downscaled. In case of NetCDF, this is a typical NetCDF output file from a WFLOW model. Alternatively, you can provide a GeoTIFF file that contains the exact flood depths for a given time step or a user defined statistic (for example the maximum value across all time steps)
-v Variable within the aforementioned file that contains the depth within the flood plain (typically levfp)
-b Similar file as -f but providing the bank full water level. Can be provided in case you know that a certain water depth is blocked, or remains within banks. In case a NetCDF is provided, the maximum values are used; alternatively, you can provide a GeoTIFF.
-c starting point of the catchment Strahler order over which flood volumes are averaged, before spreading. The Height-Above-Nearest-Drain maps are derived from this Strahler order on (until the maximum Strahler order). NB: This is the Strahler order of the high resolution stream order map
-m maximum Strahler order over which flooding may occur (default value is the highest order in the high res stream map)
-d path where the file is stored
-H HAND file prefix. As an interim product, the module produces HAND files. This is a very time consuming process and therefore the user can also supply previously generated HAND files here (GeoTIFF format). The names of the HAND files should be constructed as follows: hand_prefix_{:02d}.format(hand_strahler), so for example hand_prefix_03 for a HAND map with minimum Strahler order 3 (in this case -H hand_prefix should be given). Maps should be made for Strahler orders from -c to -m (or the maximum Strahler order in the stream map)
-n allow for negative HAND maps - if this option is set to 1, the user allows the HAND maps to become negative. This can be useful when there are natural embankments which result in a lower elevation than the river bed. However, this option leads to artifacts when the used SRTM is not corrected for elevation and when the river shapefile is not entirely correct (for example if the burned-in river is following an old meander). Therefore the user must be very confident about the used data sources (river shape file and digital elevation model corrected for vegetation) when this option is set to 1!
hand_contour_inun.log log file of the module, contains info and error messages
inun_<-f>_catch_<-c>.tif resulting inundation map (GeoTIFF)
<dem_file>_hand_strahler_<-c>.tif HAND file based upon the Strahler order given with -c (only without -H)
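A typical invocation, with illustrative file names and Strahler orders (not taken from an actual case), could look like:

python wflow_flood.py -i wflow_flood.ini -f outmaps.nc -v levfp -c 6 -m 9 -d ./inundation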
Introduction
The processes and fate of many particles and pollutants impacting water quality at the catchment level are intricately
linked to the processes governing sediment dynamics. Nutrients such as phosphorus and carbon, as well as other pollutants such as metals, are influenced by sediment properties in processes such as mobilization, flocculation or deposition. To better assess and model water quality in inland systems, a better comprehension and modelling of sediment sources and fate in the river is needed at a spatial and time scale relevant to such issues.
The wflow_sediment model was developed to address such issues. It is a distributed physics-based model, based on the distributed hydrologic wflow_sbm model. It is able to simulate both land and in-stream processes, and relies on available global datasets, parameter estimation and a small calibration effort.
In order to model the exports of terrestrial sediment to the coast through the Land Ocean Aquatic Continuum or LOAC (inland waters network such as streams, lakes...), two different modelling parts were considered (see Figure below).
The first part is the modelling and estimation of soil loss and sediment yield to the river system by land erosion, called
the soil loss model. The second part is the transport and processes of the sediment in the river system, called the river
model. The two models together constitute the wflow_sediment model.
Method
The wflow_sediment model was developed using the same framework and tool as the wflow_sbm hydrologic model.
It uses the results from the hydrology to then deduce soil erosion and delivery to the river system, in what is called the
soil loss part of the model. It finally follows the fate and transport of the incoming sediments in the stream network,
in what is called the river part of the model. To keep the consistency with wflow_sbm, the model is also developed in
Python using PCRaster functions and should use the same datasets.
The first process to consider in sediment dynamics is the generation of sediments by land erosion. The main processes
behind soil loss are rainfall erosion and overland flow erosion. In order to model such processes at a fine time and
space scale, physics-based models such as ANSWERS and EUROSEM were chosen here.
Rainfall erosion
In wflow_sediment, rainfall erosion can be modelled using either the EUROSEM or the ANSWERS equation. The main difference between the models is that EUROSEM uses a more physics-based approach based on the kinetic energy of the rain drops impacting the soil (Morgan et al, 1998), while ANSWERS is more empirical and uses parameters from the USLE model (Beasley et al, 1991).
In EUROSEM, rainfall erosion is modelled according to the rainfall intensity and its kinetic energy when it reaches the soil, following equations developed by Brandt (1990). As the kinetic energy of the rain depends on the length of the fall, rainfall intercepted by vegetation is reduced compared to direct throughfall. The kinetic energy of direct throughfall is estimated by (Morgan et al, 1998):

$$KE_{direct} = 8.95 + 8.44\,\log_{10}(R_i)$$

where $KE_{direct}$ is the kinetic energy of direct throughfall (J m$^{-2}$ mm$^{-1}$) and $R_i$ is the rainfall intensity (mm h$^{-1}$). If the rainfall is intercepted by vegetation and falls as leaf drainage, its kinetic energy is reduced according to Brandt (1990). The soil detached by rainfall is then estimated as:

$$DR = k\,KE\,e^{-\phi h}$$

where $k$ is an index of the detachability of the soil (g J$^{-1}$), $KE$ is the total rainfall kinetic energy (J m$^{-2}$), $h$ is the surface runoff depth on the soil (m) and $\phi$ is an exponent varying between 0.9 and 3.1 used to reduce the rainfall impact if the soil is already covered by water. As a simplification, Torri (1987) has shown that a value of 2.0 for $\phi$ is representative enough for a wide range of soil conditions. The detachability of the soil $k$ depends on the soil texture (proportion of clay, silt and sand content) and corresponding values are defined in the EUROSEM user guide (Morgan et al, 1998). As a simplification, in wflow_sediment the mean values of the detachability shown in the table below are used. Soil texture is derived from the topsoil clay and silt content from SoilGrids (Hengl et al, 2017).
Table: Mean detachability of soil depending on its texture (Morgan et al, 1998).
Rainfall erosion is handled differently in ANSWERS. There, the impacts of vegetation and soil properties are handled through the USLE coefficients in the equation (Beasley et al, 1991):

$$DR = 0.108\,C_{USLE}\,K_{USLE}\,A_i\,R_i^{2}$$

where $DR$ is the soil detachment by rainfall (here in kg min$^{-1}$), $C_{USLE}$ is the soil cover-management factor from the USLE equation, $K_{USLE}$ is the soil erodibility factor from the USLE equation, $A_i$ is the area of the cell (m$^{2}$) and $R_i$ is the rainfall intensity (here in mm min$^{-1}$). In wflow_sediment, there are several methods available to estimate the
𝐶 and 𝐾 factors from the USLE. They can come from user input maps, for example maps resulting from Panagos &
al.’s recent studies for Europe (Panagos et al, 2015) (Ballabio et al, 2016). To get an estimate of the 𝐶 factor globally,
the other method is to estimate 𝐶 values for the different land use types in GlobCover. These values, summed up in
the table below, come from a literature study including Panagos & al.’s review (2015), Gericke & al. (2015), Mansoor
& al. (2013), Chadli & al. (2016), de Vente & al. (2009), Borrelli & al. (2014), Yang & al. (2003) and Bosco & al.
(2015).
The other methods to estimate the USLE 𝐾 factor are to use either the topsoil composition or the topsoil geometric mean diameter. 𝐾 estimation from topsoil composition is estimated with the equation developed in the EPIC model (Williams et al, 1983):
$$K_{USLE} = \left\{0.2 + 0.3\,\exp\left[-0.0256\,SAN\left(1 - \frac{SIL}{100}\right)\right]\right\}\left(\frac{SIL}{CLA + SIL}\right)^{0.3}\left(1 - \frac{0.25\,OC}{OC + \exp(3.72 - 2.95\,OC)}\right)\left(1 - \frac{0.75\,SN}{SN + \exp(-5.51 + 22.9\,SN)}\right)$$
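The alternative estimate, based on the topsoil geometric mean diameter, is in comparable implementations computed with the relation of Renard et al.; the constants below are an assumption and are not confirmed by the text above:

$$K_{USLE} = 0.0034 + 0.0405\,\exp\left[-\frac{1}{2}\left(\frac{\log_{10}(D_g) + 1.659}{0.7101}\right)^{2}\right]$$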
where $D_g$ is the soil geometric mean diameter (mm), estimated from the topsoil clay, silt and sand fractions.
Table: Estimation of USLE C factor per Globcover land use type
Overland flow (or surface runoff) erosion is induced by the strength of the shear stress of the surface water on the soil.
As in rainfall erosion, the effect of the flow shear stress can be reduced by the soil vegetation or by the soil properties.
In wflow_sediment, soil detachment by overland flow is modelled as in ANSWERS with (Beasley et al, 1991):

$$DF = 0.90\,C_{USLE}\,K_{USLE}\,A_i\,S\,q$$

where $DF$ is the soil detachment by flow (kg min$^{-1}$), $C_{USLE}$ and $K_{USLE}$ are the USLE cover and soil erodibility factors, $A_i$ is the cell area (m$^{2}$), $S$ is the slope gradient and $q$ is the overland flow rate per unit width (m$^{2}$ min$^{-1}$). The USLE $C$ and $K$ factors can be estimated with the same methods as for rainfall erosion and here the slope gradient is obtained from the sine rather than the tangent of the slope angle.
Once the amount of soil detached by both rainfall and overland flow has been estimated, it has then to be routed
and delivered to the river network. Inland routing in sediment models is usually done by comparing the amount of
detached sediment with the transport capacity of the flow, which is the maximum amount of sediment that the flow
can carry downslope. There are several existing formulas available in the literature. For a wide range of slope and for
overland flow, the Govers equation (1990) seems the most appropriate choice (Hessel et al, 2007). However, as the
wflow_sediment model was developed to be linked to water quality issues, the Yalin transport equation was chosen as
it can handle particle differentiation (Govers equation can still be used if wflow_sediment is used to only model inland
processes with no particle differentiation). For land cells, wflow_sediment assumes that erosion can mobilize 5 classes
of sediment:
• Clay (mean diameter of 2 μm)
• Silt (mean diameter of 10 μm)
• Sand (mean diameter of 200 μm)
• Small aggregates (mean diameter of 30 μm)
• Large aggregates (mean diameter of 500 μm).
To deduce the amount of small and large aggregates from topsoil clay, silt and sand contents, the following equations
from the SWAT model are used (Neitsch et al, 2011):
$$PSA = SAN\,(1 - CLA)^{2.4}$$
$$PSI = 0.13\,SIL$$
$$PCL = 0.20\,CLA$$
$$LAG = 1 - PSA - PSI - PCL - SAG$$

where $CLA$, $SIL$ and $SAN$ are the primary clay, silt and sand fractions of the topsoil and $PCL$, $PSI$, $PSA$, $SAG$ and $LAG$ are the clay, silt, sand, small and large aggregate fractions of the detached sediment respectively. The transport
capacity of the flow using Yalin’s equation with particle differentiation, developed by Foster (1982), is:
$$TC_i = (P_e)_i\,(S_g)_i\,\rho_w\,g\,d_i\,V_*$$

where $TC_i$ is the transport capacity of the flow for particle class i, $(P_e)_i$ is the effective number of particles of class i, $(S_g)_i$ is the specific gravity for particle class i (kg m$^{-3}$), $\rho_w$ is the mass density of the fluid (kg m$^{-3}$), $g$ is the acceleration due to gravity (m s$^{-2}$), $d_i$ is the diameter of the particle of class i (m) and $V_* = (g\,R\,S)^{0.5}$ is the shear velocity of the flow (m s$^{-1}$), with $S$ the slope gradient and $R$ the hydraulic radius of the flow (m). The detached sediment is then routed downslope until it reaches the river network, using the accucapacityflux and accucapacitystate functions from the PCRaster Python framework, depending on the transport capacity from Yalin (a sketch is given below).
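A minimal sketch of this capacity-limited downslope routing with PCRaster Python (the variable names are illustrative):

import pcraster as pcr

# detached_sed: sediment detached in each cell, tc: transport capacity (same units)
# sediment that is actually transported downslope over the ldd, limited by tc
sed_flux = pcr.accucapacityflux(ldd, detached_sed, tc)
# sediment that could not be transported and stays behind in the cell
sed_deposited = pcr.accucapacitystate(ldd, detached_sed, tc)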
Finally, the different processes happening for a land cell in the soil loss part of wflow_sediment are summarized in the
figure below:
River part
As wflow_sediment was developed for applications across Europe, it must be able to simulate sediment dynamics both
for small and large catchments. For large catchments this means that processes happening in the stream network need
to be modelled more precisely. Thus, once sediments coming from land erosion reach a river cell in the model,
processes and equations change. Few models are available to simulate in-stream sediment dynamics from hydrology
alone. In the end, the more physics-based approach of the SWAT model was chosen, as it requires little or no calibration
and it can separate the suspended load from the bed load (Neitsch et al, 2011). As in SWAT, the river part of
wflow_sediment models the particle classes clay, silt, sand, small and large aggregates, and gravel. Small and large
aggregates are assumed to only come from land erosion, gravel only from river erosion, while clay, silt and sand can
come from either land or river erosion. In the river, the suspended sediment load is assumed to be the sum of clay and
silt and the bed sediment load is assumed to be composed of sand, gravel, small and large aggregates.
The first part of the river model assesses how much detached sediment is in the river cell at the beginning of the
timestep t. Sources of detached sediment are the sediments coming from land erosion, estimated with the soil loss part
of wflow_sediment, the sediment coming from upstream river cells and the detached sediment that was left in the
cell at the end of the previous timestep (t-1):

sed_in = (sed_land)_t + upstream[(sed_out)_(t−1)] + (sed_riv)_(t−1)

Sediment coming from upstream river cells is estimated using the PCRaster upstream function and the local drainage
direction map to identify the upstream river cells.
Once the amount of sediment inputs at the beginning of the timestep is known, the model then estimates transport,
and river erosion if there is a deficit of sediments. Transport in the river system is estimated via a transport capacity
formula. There are several transport capacity formulas available in wflow_sediment, some requiring calibration and
some not. Choosing a transport capacity equation depends on the river characteristics (some equations are more suited
for narrow or wider rivers) and on the reliability of the required river parameters (such as slope, width or mean particle
diameter of the river channel). Available transport capacity equations are:
• Simplified Bagnold: originally more valid for intermediate to large rivers, this simplified version of the Bagnold
equation relates sediment transport to flow velocity with two simple calibration parameters (Neitsch et al, 2011):
C_max = c_sp * (prf * Q / (h * W))^spexp
where C_max is the sediment concentration (ton m−3 or kg/L), Q is the surface runoff in the river cell
(m3/s), h is the river water level (m), W is the river width (m) and c_sp, prf and spexp are calibration
parameters. The prf coefficient is usually used to deduce the peak velocity of the flow, but for simplification
in wflow_sediment the equation was reduced to only two parameters to calibrate: spexp and
c_Bagnold = c_sp * prf^spexp. The coefficient spexp usually varies between 1 and 2 while prf and c_sp have a
wider range of variation. The table below summarizes ranges and values of the three Bagnold coefficients used
by other studies (a small numeric sketch of this simplified formulation is given after this list):
• Engelund and Hansen: not present in SWAT but used in many models such as Delft3D-WAQ, Engelund and
Hansen calculates the total sediment load as (Engelund and Hansen, 1967):
C_w = 0.05 * (ρ_s / (ρ_s − ρ)) * (u * S) / sqrt((ρ_s / (ρ_s − ρ)) * g * D_50) * θ^(1/2)
where C_w is the sediment concentration by weight, ρ and ρ_s are the fluid and sediment density (here equal to
1000 and 2650 kg m−3), u is the mean water velocity (m/s), S is the river slope, g is the acceleration due to
gravity, D_50 is the median particle diameter of the river bed (m) and θ is the Shields parameter.
• Kodatie: Kodatie (1999) developed the power relationships from Posada (1995) using field data and linear
optimization so that they would be applicable to a wider range of riverbed sediment sizes. The resulting equation,
for a rectangular channel, is (Neitsch et al, 2011):
C_max = (a * u^b * h^c * S^d / V_in) * W
where V_in is the volume of water entering the river cell during the timestep (m3) and a, b, c and d are
coefficients depending on the riverbed sediment size. Values of these coefficients are summarized in the table
below:
• Yang: Yang (1996) developed a set of two equations giving transport of sediments for sand-bed or gravel-bed
rivers. The sand equation (𝐷50 < 2𝑚𝑚) is:
log(C_ppm) = 5.435 − 0.286 * log(ω_s,50 * D_50 / ν) − 0.457 * log(u_* / ω_s,50)
+ [1.799 − 0.409 * log(ω_s,50 * D_50 / ν) − 0.314 * log(u_* / ω_s,50)] * log(u * S / ω_s,50 − u_cr * S / ω_s,50)
And the gravel equation (2 ≤ 𝐷50 < 10𝑚𝑚) is:
log(C_ppm) = 6.681 − 0.633 * log(ω_s,50 * D_50 / ν) − 4.816 * log(u_* / ω_s,50)
+ [2.784 − 0.305 * log(ω_s,50 * D_50 / ν) − 0.282 * log(u_* / ω_s,50)] * log(u * S / ω_s,50 − u_cr * S / ω_s,50)
where C_ppm is the sediment concentration in parts per million by weight, ω_s,50 is the settling velocity of a particle
with the median riverbed diameter, estimated with Stokes (m/s), ν is the kinematic viscosity of the fluid (m2/s),
u_* is the shear velocity (sqrt(g * R_H * S) in m/s, with R_H the hydraulic radius of the river) and u_cr is the critical velocity
(m/s, the equation can be found in Hessel, 2007).
• Molinas and Wu: The Molinas and Wu (2001) transport equation was developed for large sand-bed rivers based
on the universal stream power 𝜓. The corresponding equation is (Neitsch et al, 2011):
C_w = 1430 * (0.86 + √ψ) * ψ^1.5 / (0.016 + ψ) * 10^−6
where 𝜓 is the universal stream power given by:
ψ = u^3 / [((ρ_s / ρ) − 1) * g * h * ω_s,50 * (log10(h / D_50))^2]
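As referenced in the Bagnold item above, the simplified Bagnold formulation can be written as a small Python function. This is only a sketch; the coefficient values in the example call are hypothetical, not calibrated values.

def bagnold_cmax(q, h, w, c_bagnold, spexp):
    """Simplified Bagnold transport capacity, as in the equation above (sketch).

    q         : discharge in the river cell (m3/s)
    h, w      : river water level and width (m)
    c_bagnold : calibration coefficient c_sp * prf**spexp
    spexp     : calibration exponent (typically between 1 and 2)
    Returns the maximum sediment concentration C_max (ton/m3, i.e. kg/L).
    """
    return c_bagnold * (q / (h * w)) ** spexp

# Hypothetical example: 100 m3/s in a 50 m wide river with 2 m of water
print(bagnold_cmax(q=100.0, h=2.0, w=50.0, c_bagnold=1.0e-4, spexp=1.5))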
Once the maximum concentration C_max is established with one of the above transport formulas, the model then deter-
mines whether there is erosion of the river bed and bank. To do so, the difference sed_ex between the maximum
amount of sediment estimated from transport (sed_max = C_max * V_in) and the sediment inputs to the river cell (sed_in,
calculated above) is computed. If too much sediment is coming in and sed_ex is negative, then there is no river bed and
bank erosion. If the river has not yet reached its maximum transport capacity, erosion of the river bed and bank occurs.
First, the sediments stored in the cell from deposition in previous timesteps 𝑠𝑒𝑑𝑠𝑡𝑜𝑟 are eroded from clay to gravel. If
this amount is not enough to cover 𝑠𝑒𝑑𝑒𝑥 , then erosion of the local river bed and bank material starts.
Instead of simply setting the river erosion amount to cover the remaining difference sed_ex,eff between sed_ex and sed_stor,
the actual erosion potential is adjusted using river characteristics and is split between the bed and the bank of the river
using the physics-based approach of Knight (1984).
The bed and bank of the river are assumed to be able to erode at most a certain amount of their material, E_R,bed for
the bed and E_R,bank for the river bank. For a rectangular channel, assuming the river is meandering and thus only one
bank is prone to erosion, they are calculated from the equations (Neitsch et al, 2011):

E_R,bed = k_d,bed * (τ_e,bed − τ_cr,bed) * 10^−6 * L * W * ρ_b,bed * Δt

E_R,bank = k_d,bank * (τ_e,bank − τ_cr,bank) * 10^−6 * L * h * ρ_b,bank * Δt

where E_R is the potential erosion (tons), k_d the erodibility of the bed/bank material (cm3 N−1 s−1), τ_e and τ_cr the
effective and critical shear stress (N m−2), L, W and h the channel length, width and water level (m), ρ_b the bulk
density of the bed/bank material (g cm−3) and Δt the timestep (s).
Normally, erodibilities are evaluated using jet tests in the field and there are several reviews of, and possible adjustments
to, this equation (Simon et al, 2011). However, to avoid overly heavy calibration and for the scale considered, this
equation is assumed to be sufficient. The critical shear stress τ_cr is evaluated differently for the bed and the bank. For
the bed, the common Shields initiation-of-movement criterion is used. For the bank, a more recent approach from
Julian and Torres (2006) is used:

τ_cr,bank = (0.1 + 0.1779 * SC + 0.0028 * SC^2 − 2.34 * 10^−5 * SC^3) * C_ch
where SC is the percent clay and silt content of the river bank and C_ch is a coefficient taking into account the positive
impact of vegetation on erosion reduction. This coefficient depends on the land use and classical values are shown in
the table below. These values were then adapted for use with the GlobCover land use map. The percentages of clay
and silt (along with sand and gravel) of the channel are estimated from the river median particle diameter, assuming the
same values as SWAT (shown in the table below). The median particle diameter is here estimated from the Strahler
river order: the higher the order, the smaller the diameter. As the median diameter is only used in wflow_sediment
to estimate the river bed/bank sediment composition, this assumption should be sufficient. Actual refined data
or calibration may however be needed if the median diameter is also required for the transport formula. In a similar
way, the bulk densities of the river bed and bank are simply assumed to be 1.5 and 1.4 g/cm3, respectively.
Table: Composition of the river bed/bank depending on the median diameter (Neitsch et al, 2011)
Then, the repartition of the flow shear stress is refined into the effective shear stress on the bed and on the bank of the
river, using the equations developed by Knight (1984) for a rectangular channel:

τ_e,bed = ρ * g * R_H * S * (1 − SF_bank/100) * (1 + 2h/W)

τ_e,bank = ρ * g * R_H * S * SF_bank * (1 + W/(2h))

where SF_bank is the percentage of the shear force acting on the bank.
River deposition
As sediments have a higher density than water, moving sediments in water can be deposited in the river bed. The
deposition process depends on the mass of the sediment, but also on flow characteristics such as velocity. In
wflow_sediment, as in SWAT, deposition is modelled with Einstein’s equation (Neitsch et al, 2011):
P_dep = (1 − 1/e^x) * 100
where 𝑃𝑑𝑒𝑝 is the percentage of sediments that is deposited on the river bed and x is a parameter calculated with:
x = 1.055 * L * ω_s / (u * h)
where L and h are the channel length and water level (m), ω_s is the particle settling velocity calculated with Stokes'
formula (m/s) and u is the mean flow velocity (m/s). The calculated percentage is applied to the amount of sediment
input and eroded river sediment for each particle size class (sed_dep = P_dep/100 * (sed_in + sed_erod)).
The resulting deposited sediment is stored in the river bed and can be re-mobilized in future timesteps by erosion.
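A minimal sketch of this deposition step is given below; the numbers in the example call are hypothetical and the deposited fraction is returned directly as a fraction (0-1) rather than a percentage.

import math

def river_deposition_fraction(length, settling_velocity, velocity, h):
    """Einstein deposition (sketch): fraction of incoming sediment deposited.

    length, h         : channel length and water level (m)
    settling_velocity : particle settling velocity from Stokes (m/s)
    velocity          : mean flow velocity (m/s)
    """
    x = 1.055 * length * settling_velocity / (velocity * h)
    return 1.0 - 1.0 / math.exp(x)

# Deposited mass for one particle class (tons), given inputs and river erosion
sed_in, sed_erod = 12.0, 3.0
p_dep = river_deposition_fraction(length=500.0, settling_velocity=4e-4,
                                  velocity=0.8, h=1.2)
sed_dep = p_dep * (sed_in + sed_erod)
print(sed_dep)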
Finally after estimating inputs, deposition and erosion with the transport capacity of the flow, the amount of sediment
actually leaving the river cell to go downstream is estimated using:
sed_out = (sed_in + sed_erod − sed_dep) * V_out / V
where 𝑠𝑒𝑑𝑜𝑢𝑡 is the amount of sediment leaving the river cell (tons), 𝑠𝑒𝑑𝑖𝑛 is the amount of sediment coming into the
river cell (storage from previous timestep, land erosion and sediment flux from upstream river cells in tons), 𝑠𝑒𝑑𝑒𝑟𝑜𝑑
is the amount of sediment coming from river erosion (tons), 𝑠𝑒𝑑𝑑𝑒𝑝 is the amount of deposited sediments (tons), 𝑉𝑜𝑢𝑡
is the volume of water leaving the river cell (surface runoff 𝑄 times timestep ∆𝑡 in m3 ) and 𝑉 is the total volume of
water in the river cell (𝑉𝑜𝑢𝑡 plus storage ℎ * 𝑊 * 𝐿 in m3 ).
A mass balance is then used to calculate the amount of sediment remaining in the cell at the end of the timestep
(𝑠𝑒𝑑𝑟𝑖𝑣 )𝑡 :
(sed_riv)_t = (sed_riv)_(t−1) + (sed_land)_t + upstream[(sed_out)_(t−1)] + (sed_erod)_t − (sed_dep)_t − (sed_out)_t
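The outflow fraction and the mass balance above can be combined into a small helper function. This is a sketch only (one cell, one particle class, scalar values instead of PCRaster maps):

def river_sediment_budget(sed_in, sed_erod, sed_dep, q_out, dt, h, w, length):
    """Sketch of the river cell sediment budget described above.

    sed_in, sed_erod, sed_dep : incoming, river-eroded and deposited sediment (tons)
    q_out  : discharge leaving the cell (m3/s); dt is the timestep (s)
    h, w, length : water level, width and length of the river cell (m)
    Returns (sed_out, sed_riv) in tons.
    """
    v_out = q_out * dt                                # water volume leaving the cell (m3)
    v_tot = v_out + h * w * length                    # total water volume in the cell (m3)
    sed_out = (sed_in + sed_erod - sed_dep) * v_out / v_tot
    sed_riv = sed_in + sed_erod - sed_dep - sed_out   # sediment left in the cell
    return sed_out, sed_riv

print(river_sediment_budget(sed_in=15.0, sed_erod=2.0, sed_dep=4.0,
                            q_out=80.0, dt=86400.0, h=1.5, w=40.0, length=1000.0))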
Finally, the different processes happening for a river cell in the river part of wflow_sediment are summarized in the
figure below:
Lake modelling
Apart from land and river processes, the hydrologic wflow_sbm model also handles lake and reservoir modelling. In
wflow_sbm, lakes and large reservoirs are modelled using a 1D bucket model at the cell corresponding to the out-
let. For the other cells belonging to the lake/reservoir, which are not the outlet, processes such as precipitation and
evaporation are filtered out and shifted to the outlet cell. wflow_sediment handles lakes in the same way: if a
cell belongs to a lake/reservoir and is not the outlet, the model assumes that no erosion/deposition of sediments
happens and that all sediments are transported to the lake/reservoir outlet. Once the sediments reach the outlet,
they are deposited in the lake/reservoir according to Camp's model (1945) (Verstraeten et al, 2000):
TE = ω_s / u_cr,res = (A_res / Q_out,res) * ω_s
where TE is the trapping efficiency of the lake/reservoir (the fraction of particles trapped), ω_s is the particle settling
velocity from Stokes (m/s) and u_cr,res is the reservoir's critical settling velocity (m/s), which is equal to the reservoir's
outflow Q_out,res (m3/s) divided by the reservoir's surface area A_res (m2).
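A minimal sketch of Camp's trapping efficiency is shown below; the reservoir characteristics and settling velocity in the example are hypothetical, and the efficiency is capped at 1 (all particles trapped).

def lake_trapping_efficiency(settling_velocity, q_out, area):
    """Camp (1945) trapping efficiency of a lake/reservoir (sketch).

    settling_velocity : particle settling velocity from Stokes (m/s)
    q_out             : reservoir outflow (m3/s)
    area              : reservoir surface area (m2)
    Returns the fraction of incoming particles that is trapped.
    """
    te = settling_velocity * area / q_out
    return min(te, 1.0)

# Example: slowly settling clay-sized particles are mostly not trapped
print(lake_trapping_efficiency(settling_velocity=4e-7, q_out=20.0, area=5e6))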
The wflow_sediment model was developed as part of the wflow hydrologic platform and is therefore another wflow
module, developed in Python and using the same framework as wflow_sbm. First, the model case is set up and run
normally with wflow_sbm. Then wflow_sediment is run using the outputs of the hydrologic model. As the settings for
wflow_sbm are explained in the corresponding part of this documentation, only the specific details for running
wflow_sediment are described here.
Running wflow_sbm
To model sediment dynamics, the first step is to build a wflow_sbm model and to run it for the catchment considered.
Apart from the usual settings of the wflow_sbm model, an additional requirement for a wflow_sediment run is to save
the following variables in the outputmaps section of the wflow_sbm.ini file:
• Precipitation “self.Precipitation” (can also be taken directly from the wflow_sbm forcings)
• Land runoff (overland flow) from the kinematic wave “self.LandRunoff”
• River runoff from the kinematic wave “self.RiverRunoff”
• Land water level in the kinematic wave “self.WaterLevelL”
• River water level in the kinematic wave “self.WaterLevelR”
• Rainfall interception by the vegetation “self.Interception”.
wflow_sediment also needs some static output maps which are saved by default by wflow_sbm: the maps of the actual
width and length of the flow volume (Bw and DCL.map). After the set-up, wflow_sbm is run normally, either via a
batch file or via the command line.
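For reference, a possible [outputmaps] section of the wflow_sbm.ini file could look as follows. This is only a sketch: the map prefixes are chosen to match the inputmapstacks example shown further below and can be changed as long as both files stay consistent.

[outputmaps]
# dynamic wflow_sbm outputs needed by wflow_sediment
self.Precipitation=P
self.Interception=int
self.RiverRunoff=runR
self.LandRunoff=runL
self.WaterLevelR=levKinR
self.WaterLevelL=levKinL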
As wflow_sediment is built in the same way as wflow_sbm, its settings and use are very similar. First, some additional
data must be downloaded. Then the corresponding ini file that summarizes all inputs and outputs of the model run is
completed and the model can finally be run.
Apart from data such as the land use, catchment and ldd maps that are already needed for the wflow_sbm run,
wflow_sediment requires some additional data:
• Map with topsoil percent of clay: this can be downloaded, as for the other wflow_sbm soil data, from the SoilGrids
database (Hengl et al, 2017). Values then need to be resampled and adjusted to the model grid size (for the
global version of wflow by averaging). This data is mandatory for the sediment model to run.
• Map with topsoil percent of silt: this can also be downloaded from SoilGrids and processed in the same way as
the topsoil clay map. This data is mandatory for the sediment model to run.
• Map with topsoil percent of organic carbon: this data can be downloaded from SoilGrids. Units should be in
percent (SoilGrids gives it in per-mille) and adjusted to the model grid cells. This data is only needed if the user
wishes to calculate the USLE K parameter of soil erosion using the EPIC formula.
• Map of vegetation height: this is available globally using the map published by Simard et al. (2011). Other
sources can however be used. Units should be in meters. Vegetation height is only needed if the EUROSEM
model is used to calculate rainfall erosion.
As for wflow_sbm, the setting up of the wflow_sediment model is also done via an ini file and its different sections. A
complete example is given in the wflow examples folder. The main sections and options needed are:
• inputmapstacks: Links to the dynamic outputs of the wflow_sbm run, either stored as maps in the outmaps
folder of the sbm run or in a netcdf file. The dynamic data needed are Precipitation, LandRunoff, RiverRunoff,
WaterLevelL, WaterLevelR and Interception.
[inputmapstacks]
# Outputs from wflow_sbm
Precipitation = /inmaps/P
Interception = /inmaps/int
RiverRunoff = /inmaps/runR
LandRunoff = /inmaps/runL
WaterLevelR = /inmaps/levKinR
WaterLevelL = /inmaps/levKinL
• framework: As for wflow_sbm, specifies whether the inputs and outputs of the model are in netcdf format or PCRaster
maps. If the results of wflow_sbm are saved in a netcdf file, the path to this file is given in the netcdfinput
argument.
• run: Info on the run parameters of the model. The start time, end time and timestep of the model are written
in this section. The reinit argument specifies whether the model should start from cold states (all state maps of the
model are set to zero if reinit = 1) or from the state maps given in the instate folder of the model (reinit = 0).
• modelparameters: Other parameters used by the model. This section should include the same inputs as the
wflow_sbm.ini file for reservoir / lake modelling and Leaf Area Index data.
• layout: Specifies if the cell size is given in lat-lon (sizeinmetres = 0) or in meters (1). Should be set as in
wflow_sbm.
• outputmaps: As in wflow_sbm, this section is used to choose which dynamic data to save from the
wflow_sediment run.
• summary: Used to save summary maps of wflow_sediment outputs such as the yearly average or yearly sum. It
works in the same way as for wflow_sbm (see the wflow documentation for more details).
• outputcsv and outputtss: Used to save the evolution of wflow_sediment outputs for specific points or areas of
interest in csv or tss format. They work in the same way as for wflow_sbm (see the wflow documentation for more
details).
Once all the settings are ready, the wflow_sediment model is run similarly to wflow_sbm via the command line or a
batch file. The minimum command line requires:
• The link to the wflow_sediment script.
• -C option stating the name of the wflow case directory.
• -R option stating the name of the directory of wflow_sediment outputs.
• -c option stating the path to the wflow_sediment ini file.
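A possible command line is shown below; the case and run directory names are placeholders only and depend on how the model case was set up:

python wflow_sediment.py -C wflow_example_case -R run_sediment -c wflow_sediment.ini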
As in wflow_sbm, the outputs of the wflow_sediment model can be dynamic netcdf/pcraster map data, static
data, or dynamic data for points/areas of interest. The main output variables are the soil loss by rainfall and overland
flow erosion (“self.SedSpl” + “self.SedOv” = “self.soilloss” in ton/timestep/cell) and the variables from the river part
of the sediment model.
References
• K.C. Abbaspour, J. Yang, I. Maximov, R. Siber, K. Bogner, J. Mieleitner, J. Zobrist, and R. Srinivasan. Modelling
hydrology and water quality in the pre-alpine/alpine Thur watershed using SWAT. Journal of Hydrology, 333(2-
4):413-430, 2007. 10.1016/j.jhydrol.2006.09.014
• C. Ballabio, P. Panagos, and L. Monatanarella. Mapping topsoil physical properties at European scale using the
LUCAS database. Geoderma, 261:110-123, 2016. 10.1016/j.geoderma.2015.07.006
• D.B. Beasley and L.F. Huggins. ANSWERS - Users Manual. Technical report, EPA, 1991.
• P. Borrelli, M. Marker, P. Panagos, and B. Schutt. Modeling soil erosion and river sediment yield
for an intermountain drainage basin of the Central Apennines, Italy. Catena, 114:45-58, 2014.
10.1016/j.catena.2013.10.007
• C. Bosco, D. De Rigo, O. Dewitte, J. Poesen, and P. Panagos. Modelling soil erosion at European scale:
Towards harmonization and reproducibility. Natural Hazards and Earth System Sciences, 15(2):225-245, 2015.
10.5194/nhess-15-225-2015
• C.J Brandt. Simulation of the size distribution and erosivity of raindrops and throughfall drops. Earth Surface
Processes and Landforms, 15(8):687-698, dec 1990.
• K. Chadli. Estimation of soil loss using RUSLE model for Sebou watershed (Morocco). Modeling Earth Sys-
tems and Environment, 2(2):51, 2016. 10.1007/s40808-016-0105-y
• F. Engelund and E. Hansen. A monograph on sediment transport in alluvial streams. Technical University of
Denmark, Østervoldgade 10, Copenhagen K., 1967.
• G.R. Foster. Modeling the erosion process. Hydrologic modeling of small watersheds, pages 295-380, 1982.
• A. Gericke. Soil loss estimation and empirical relationships for sediment delivery ratios of European river
catchments. International Journal of River Basin Management, 2015. 10.1080/15715124.2014.1003302
• G. Govers. Empirical relationships for the transport capacity of overland flow. IAHS Publication, (January
1990):45-63 ST, 1990.
• G.J. Hanson and A. Simon. Erodibility of cohesive streambeds in the loess area of the midwestern USA. Hydro-
logical Processes, 15(May 1999):23-38, 2001.
• T. Hengl, J. Mendes De Jesus, G.B.M. Heuvelink, M. Ruiperez Gonzalez, M. Kilibarda, A. Blagotic, W. Shang-
guan, M. N. Wright, X. Geng, B. Bauer- Marschallinger, M.A. Guevara, R. Vargas, R.A. MacMillan, N.H.
Batjes, J.G.B. Leenaars, E. Ribeiro, I. Wheeler, S. Mantel, and B. Kempen. SoilGrids250m: Global gridded soil
information based on machine learning. PLoS ONE, 12(2), 2017. 10.1371/journal.pone.0169748
• R. Hessel and V. Jetten. Suitability of transport equations in modelling soil erosion for a small Loess Plateau
catchment. Engineering Geology, 91(1):56-71, 2007. 10.1016/j.enggeo.2006.12.013
• J.P. Julian and R. Torres. Hydraulic erosion of cohesive riverbanks. Geomorphology, 76:193-206, 2006.
10.1016/j.geomorph.2005.11.003
Introduction
Wflow_lintul, a raster-based crop growth model for rice, is based on LINTUL3, a point-based model for simulating
nitrogen-limited rice growth (Shibu et al., 2010). LINTUL3 was parameterized and calibrated for rice by Shibu et
al. (2010), based on experimental data from Southeast Asia (Drenth et al., 1994) and drawing on the more complex
rice model ORYZA2000 (Bouman et al., 2001) and the preceding versions LINTUL1 (Spitters, 1990) and LINTUL2
(Spitters and Schapendonk, 1990). In contrast to LINTUL3, wflow_lintul is primarily intended for simulation of rice
production under water-limited conditions, rather than under nitrogen-limited conditions. To that end, it was designed
to function in close cooperation with the spatial hydrological model wflow_sbm, which operates on a watershed-scale.
The LINTUL (Light Interception and Utilization) models were the first deviation from the more complex,
photosynthesis-based models of the “De Wit school” of crop modelling, also called the “Wageningen school” (Bouman
et al., 1996). In the LINTUL models, total dry matter production is calculated in a comparatively simple way, using the
Monteith approach (Monteith, 1969; 1990). In this approach, crop growth is calculated as the product of interception
of (solar) radiation by the canopy and a fixed light-use efficiency (LUE; Russell et al., 1989). This way of estimating
(daily) biomass production in the LINTUL models is reflected in the equation below and may be considered the core
of the wflow_lintul model:

GTOTAL = self.LUE * PARINT * TRANRF

with GTOTAL the overall daily growth rate of the crop (g m−2 d−1), self.LUE a constant light use efficiency (g MJ−1),
PARINT the daily intercepted photosynthetically active radiation (MJ m−2 d−1) and TRANRF (-) the ‘transpiration
reduction factor’, i.e. the reducing impact of water shortage on biomass production.
For regional studies, LINTUL-type models have the advantage that data input requirements are drastically reduced and
model parameterization is facilitated (Bouman et al., 1996). LINTUL was first developed for potential crop growth (i.e.
perfect growing conditions without any water or nutrient shortages and in absence of pests, diseases and adverse soil
conditions) as “LINTUL1” (Spitters, 1990). Later, it was extended to simulate water-limited conditions (“LINTUL2”;
Spitters and Schapendonk, 1990) and nitrogen-limited conditions (“LINTUL3”; Shibu et al., 2010). Under water-
limited conditions, all growth factors except water are assumed non-limiting, i.e. ample nutrient availability, a pest-,
disease- and weed-free environment and no adverse soil conditions. Under nitrogen-limited conditions, only nitrogen
availability may limit crop growth. LINTUL has been successfully applied to different crops such as potato (Spitters
and Schapendonk, 1990), grassland (LINGRA) (Schapendonk et al., 1998), maize (Farré et al., 2000), oilseed rape
(Habekotté, 1997) and rice (Shibu et al., 2010), in potential, water-limited or nitrogen-limited situations.
• Potential and water-limited simulations: Wflow_lintul presently (spring 2018) simulates potential and water-
limited crop growth (the latter in conjunction with wflow_sbm). First preparations to add simulation of nitrogen-
limited rice growth (LINTUL3) have been made in the present version of the model.
• Water balance outsourced to wflow_sbm: The simple water balance, based on Stroosnijder (1982) and Pen-
ning de Vries et al. (1989), which was present in LINTUL2 and LINTUL3, is no longer present in wflow_lintul.
All water balance-related simulation tasks are outsourced to the wflow_sbm model. On the other hand, several
crop growth related tasks in the hydrology model wflow_sbm, such as the simulation of LAI (leaf area index)
are now outsourced to wflow_lintul. In the wflow framework, wflow_lintul and wflow_sbm communicate with
each other via the Basic Model Interface (BMI) implementation for wflow (wflow_bmi).
• Written in PCRaster Python: Whereas the original LINTUL1, LINTUL2 and LINTUL3 models were imple-
mented in the Fortran Simulation Translator (FST) software (Rappoldt and Van Kraalingen, 1996), wflow_lintul
is written in PCRaster Python and fully integrated into the wflow hydrologic modelling framework.
Run options
For stand-alone (potential production) runs, i.e. without linking to the hydrology model wflow_sbm, the “WATERLIMITED”
option in wflow_lintul.ini should be set to “False”. Water-limited simulation presently requires the presence of a water bal-
ance/hydrology model (i.e. wflow_sbm).
For running wflow_lintul in conjunction with the hydrological model wflow_sbm, which takes care of all water-
balance related tasks and which is what the model is really intended for, a Python BMI runner module, exchanging
data between wflow_sbm and wflow_lintul, needs to be run.
An example of a coupled wflow_sbm and wflow_lintul model is available in
\wflow\examples\wflow_brantas_sbm_lintul
To run the coupled models:
• activate wflow
• python bmi2runner.py bmi2runner.ini
Wflow_lintul model runs can be configured by editing the values of a number of parameters in the [model] section of
the wflow_lintul.ini file. Wflow_lintul reads the values from wflow_lintul.ini with its parameters function; if a certain
parameter or value is not found in wflow_lintul.ini, a default value (provided for each individual parameter in the
parameters function) is returned. The following nine variables in wflow_lintul.ini have a run control function:
[model]
CropStartDOY = 0
HarvestDAP = 90
WATERLIMITED = True
AutoStartStop = True
RainSumStart_Month = 11
RainSumStart_Day = 1
RainSumReq = 200
Pause = 13
Sim3rdSeason = True
[model]
LAT = 3.16
LUE = 2.47
TSUMI = 362
K = 0.6
SLAC = 0.02
TSUMAN = 1420
TSUMMT = 580.
TBASE = 8.
RGRL = 0.009
Fig. 17: The RDRTB parameter (list) defines the relative (daily) death rate of leaves (RDRTMP) as a function of crop
Developmental Stage (DVS)
Fig. 18: The PHOTTB parameter (table) defines the modifying effect of photoperiodicity on the phenological devel-
opment rate as a function of day length (DAYL)
SLACF (real, dimensionless): interpolation table defining the leaf area correction factor (SLACF) as a
function of development stage (DVS; Drenth et al., 1994). The Specific Leaf Area Constant (SLAC) is
multiplied with a correction factor to obtain the specific leaf area (SLA); this correction factor, in turn,
is obtained by linear interpolation in the SLACF table (Figure 3), using the relevant value of DVS as
independent variable:

SLA = self.SLAC * self.SLACF.lookup_linear(self.DVS)

where self.SLACF.lookup_linear(self.DVS) returns the relevant value of the correction factor by linear
interpolation based on DVS.
Fig. 19: The SLACF interpolation table defines the leaf area correction factor (SLACF) as a function of development
stage (DVS)
FRTTB (real, dimensionless): interpolation table defining the fraction of daily dry matter production
allocated to root growth (FRTWET), in absence of water shortage, as a function of development stage
(DVS). FRTWET is obtained by linear interpolation in FRTTB, with DVS as the independent variable:
FRTWET = self.FRTTB.lookup_linear(self.DVS)
Fig. 20: The FRTTB interpolation table defines the fraction of daily dry matter production allocated to root growth
(FRTWET) in absence of water shortage as a function of the crop development stage (DVS).
FLVTB (real, dimensionless): interpolation table defining the fraction of daily dry matter production
allocated to growth of leaves (FLVT), in absence of water shortage, as a function of development stage
(DVS). FLVT is obtained by linear interpolation in FLVTB, with DVS as the independent variable:
FLVT = self.FLVTB.lookup_linear(self.DVS)
Fig. 21: The FLVTB interpolation table defines the fraction of daily dry matter production allocated to growth of
leaves (FLVT) in absence of water shortage, as a function of development stage.
FSTTB (real, dimensionless): interpolation table defining the fraction of daily dry matter production
allocated to growth of stems (FSTT), in absence of water shortage, as a function of development stage
(DVS). FSTT is obtained by linear interpolation in FSTTB, with DVS as the independent variable:
Fig. 22: The FSTTB interpolation table defines the fraction of daily dry matter production allocated to growth of stems
(FSTT) in absence of water shortage, as a function of the crop development stage (DVS).
FSOTB (real, dimensionless): interpolation table defining the fraction of daily dry matter production
allocated to growth of storage organs (FSOT), in absence of water shortage, as a function of development
stage (DVS; Figure 7). FSOT is obtained by linear interpolation in FSOTB, with DVS as the independent
variable:
FSOT = self.FSOTB.lookup_linear(self.DVS)
Fig. 23: The FSOTB interpolation table defines the fraction of daily dry matter production allocated to growth of
storage organs (FSOT), in absence of water shortage, as a function of the crop development stage (DVS).
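The lookup_linear calls above boil down to one-dimensional linear interpolation in a DVS-indexed table. A minimal numpy sketch is given below; the table values are purely illustrative, not the calibrated IR72 parameters.

import numpy as np

# Illustrative partitioning table: DVS values and the fraction allocated to leaves
flvtb_dvs = np.array([0.0, 0.5, 0.75, 1.0, 2.5])
flvtb_frac = np.array([0.55, 0.55, 0.30, 0.0, 0.0])

def lookup_linear(table_x, table_y, dvs):
    """Linear interpolation with DVS as the independent variable."""
    return np.interp(dvs, table_x, table_y)

flvt = lookup_linear(flvtb_dvs, flvtb_frac, dvs=0.6)   # fraction allocated to leaves
print(flvt)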
Phenology
Phenological development of crops is generally closely related to thermal time, i.e. the accumulated number of degree-
days after emergence. Instead of simply adding up degree-days, wflow_lintul uses a daily effective temperature
(DTEFF, °C d), since many growth processes are temperature dependent, or only occur, above a certain threshold
temperature. DTEFF is calculated according to:

DTEFF = DegreeDay if Warm_Enough, and DTEFF = 0 otherwise

with DTEFF the daily effective temperature (°C d), Warm_Enough a Boolean variable that equals True when the daily
average temperature T is larger than or equal to TBASE, the base temperature for a rice crop (°C), and DegreeDay the
daily average temperature reduced by TBASE (T − TBASE). Thus, if T is greater than or equal to TBASE, DTEFF is
set equal to DegreeDay; in all other cases it is set to 0. In addition,
before DTEFF is added to TSUM (state variable), it is corrected for the potentially modifying effect of day length
(photoperiodicity), resulting in the daily increase in temperature sum RTSUMP (°C d), according to:

RTSUMP = DTEFF * PHOTPF

where RTSUMP is the daily increase in temperature sum (°C d), modified by the influence of photoperiod, DTEFF
the daily effective temperature (°C d) and PHOTPF a correction factor that accounts for the modifying influence of day
length on crop development via DTEFF; it is defined as a function of day length (DAYL). PHOTPF is less than 1 if
day length (DAYL, hours) is shorter than 10 hours or longer than 12 hours; phenological development is thus slowed
down in such cases. Day lengths between 10 and 12 hours have no modifying influence on DTEFF. DAYL in
wflow_lintul is calculated by the function astro_py, a Python implementation of a FORTRAN subroutine from the
Wageningen school of models, dating back to Spitters et al. (1989) and likely even further.
Calculation of TSUM (model state variable) by accumulating daily values of RTSUMP is then done according to:
where DVS_veg is the DVS during the vegetative crop stage, CropHarvNow a Boolean variable that equals True when
the crop is mature or has reached a fixed pre-defined harvest date, DVS_gen the DVS during the generative crop stage,
TSUMAN the value of TSUM at which crop anthesis is initiated (1420 °C d for rice variety IR72), TSUMMT the
change in TSUM required for the crop to develop from anthesis to maturity, when ripened rice grain can be harvested
(580 °C d for rice variety IR72), and DVS the crop (phenological) development stage.
Hence, DVS is calculated as the sum of DVS_veg and DVS_gen; it equals 1 (flowering) if TSUM reaches TSUMAN
and equals 2 (maturity) if TSUM reaches the sum TSUMAN + TSUMMT. DVS_veg and DVS_gen are both reset
(multiplied with zero) when the crop is harvested.
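The phenology described above can be summarized in a few lines of Python. This is only a sketch under simplifying assumptions (no day-length correction beyond a constant PHOTPF, no reset at harvest); the temperature series is hypothetical.

def daily_tsum_increase(temp, tbase, photpf):
    """Daily increase in temperature sum RTSUMP (degC d), as described above.

    temp   : daily average temperature (degC)
    tbase  : base temperature of the crop (degC), e.g. 8 for rice
    photpf : day-length correction factor (-), 1.0 for 10-12 h day lengths
    """
    warm_enough = temp >= tbase
    degree_day = temp - tbase
    dteff = degree_day if warm_enough else 0.0      # daily effective temperature
    return dteff * photpf

# Accumulate TSUM over a few (hypothetical) warm days and derive DVS
tsum, tsuman, tsummt = 0.0, 1420.0, 580.0
for t in [27.0, 29.5, 26.0, 31.0]:
    tsum += daily_tsum_increase(t, tbase=8.0, photpf=1.0)
dvs_veg = min(tsum / tsuman, 1.0)                          # vegetative stage (0-1)
dvs_gen = min(max((tsum - tsuman) / tsummt, 0.0), 1.0)     # generative stage (0-1)
dvs = dvs_veg + dvs_gen                                    # 1 = anthesis, 2 = maturity
print(tsum, dvs)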
As outlined in the Introduction, overall assimilate production rate in wflow_lintul is calculated as the product of
the Photosynthetically Active (solar) Radiation (PAR) that is intercepted by the crop canopy and a fixed Light-Use
Efficiency (LUE; Russell et al., 1989). This way of estimating (daily) biomass production in the LINTUL models is
reflected as follows and may be considered the core of the wflow_lintul model:

GTOTAL = self.LUE * PARINT * TRANRF

with GTOTAL the overall daily growth rate of the crop (g m−2 d−1), self.LUE a constant light use efficiency (g MJ−1),
PARINT the daily intercepted photosynthetically active radiation (MJ m−2 d−1) and TRANRF (-) the ‘transpiration
reduction factor’, i.e. the reducing impact of water shortage on biomass production.
The daily intercepted PAR (PARINT,MJ m−2 d−1 ), is calculated as:
with: Not_Finished a Boolean variable that indicates whether the crop is still growing and developing (True) or not
(False). 0.5 a factor to account for the fraction of photosynthetically active radiation in the incident solar radiation.
About 50% (in terms of energy) of the frequency spectrum of incident solar radiation can be used for photosynthesis
by green plants. IRRAD: incident solar radiation (m−2 d−1 ) as measured by e.g. a weather station. K a crop and
variety-specific light extinction coefficient (-) and LAI the leaf area index (m2 leaf m−2 ground).
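A runnable version of the two equations above is sketched below, assuming the standard LINTUL (Lambert-Beer) form of light interception; the input values are hypothetical.

import math

def daily_growth(irrad, lai, k, lue, tranrf):
    """Sketch of the wflow_lintul growth source term described above.

    irrad  : incident solar radiation (MJ m-2 d-1)
    lai    : leaf area index (m2 leaf m-2 ground)
    k      : light extinction coefficient (-), e.g. 0.6
    lue    : light-use efficiency (g dry matter MJ-1 PAR), e.g. 2.47
    tranrf : transpiration reduction factor (-), 1.0 when water is not limiting
    Returns (PARINT, GTOTAL).
    """
    parint = 0.5 * irrad * (1.0 - math.exp(-k * lai))   # intercepted PAR (MJ m-2 d-1)
    gtotal = lue * parint * tranrf                      # dry matter growth (g m-2 d-1)
    return parint, gtotal

print(daily_growth(irrad=18.0, lai=3.0, k=0.6, lue=2.47, tranrf=1.0))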
The overall daily growth of the crop (GTOTAL, g m−2 d−1) is partitioned over growth of leaves, stems, storage organs
(grains in the case of rice) and roots:
where RWLVG, RWST, RWSO and RWRT are the daily growth rates of leaves, stems, storage organs (i.e. rice grains)
and roots, respectively (g m−2 d−1), GTOTAL is the overall daily growth rate of the crop (g m−2 d−1), and EMERG is a
Boolean variable that indicates whether crop growth occurs; it equals True when three conditions are met:
• crop growth has been initiated
• the water content of the soil is larger than the water content at permanent wilting point
• LAI is greater than 0.
FLV, FST, FSO and FRT are the fractions (-) of the overall biomass growth rate (GTOTAL; g m−2 d−1) allocated to leaves,
stems, storage organs (rice grains) and roots, respectively. They are related to the phenological development stage of
the crop (DVS), following relationships defined in the parameter interpolation tables FLVTB, FSTTB, FSOTB and
FRTTB. If water shortage occurs, growth of the belowground and aboveground crop parts is modified with the factors
FRTMOD and FSHMOD, respectively:
FLV = FLVT * FSHMOD
FST = FSTT * FSHMOD
FSO = FSOT * FSHMOD
FRT = FRTWET * FRTMOD
where FLVT, FSTT, FSOT and FRTWET are the (raw) interpolated allocation fractions for leaves, stems, storage
organs (rice grains) and roots, respectively, and FLV, FST, FSO and FRT are the final allocated fractions, after modifi-
cation for water shortage (if any). DLV and DRRT are the death rates of leaves and roots, respectively (g m−2 d−1).
Summarizing: as long as crop growth occurs (i.e. if EMERG = True), growth rates of the different plant organs are
calculated by multiplying overall crop growth with organ-specific fractions that depend on the crop development stage.
The death rate of leaves is calculated as:

DLV = self.WLVG * RDR

with self.WLVG the total green leaf biomass (g m−2) and RDR the relative (daily) death rate of leaves (-). The daily
change in root weight as a consequence of the dying of roots is calculated analogously:

DRRT = self.WRT * RDRRT

with self.WRT the total root biomass (g m−2) and RDRRT the relative (daily) death rate of roots (-).
Now that the net (mass) growth rates of leaves, stems, storage organs (rice grain) and roots are known, their respective
masses (model state variables) can also be calculated, according to:
with:
• WLVG, WST, WSO and WRT the weights of green leaves, stems, storage organs and roots (g m−2)
• CropStartNow: a Boolean variable that equals True at the moment of crop growth initiation
• WLVGI, WSTI, WSOI, WRTLI the initial weights of green leaves, stems, storage organs and roots
(g m−2), i.e. at the moment of transplanting
• RWLVG, RWST, RWSO and RWRT the daily growth rates of leaves, stems, storage organs (i.e. rice grains) and
roots, respectively (g m−2 d−1)
• CropHarvNow a Boolean variable that only equals True on the day that the crop is being harvested
Whereas the daily increase in leaf biomass was already explained above, the daily increase in leaf area (Leaf Area
Index, LAI, m2 m−2) is simulated as:
with:
• GLAI the (daily) growth in LAI (m2 m−2 d−1)
• LetsGro a Boolean variable that, as soon as it becomes “True”, triggers the initiation of crop growth
• LAII (m2 m−2) the initial value of LAI (leaf area index)
• Juv_or_Harv a Boolean variable indicating that the crop is juvenile or is being harvested
• RGRL the relative (daily) growth rate of leaf area, expressed per degree-day ((°C d)−1)
with LAI the leaf area index (m2 m−2), DLAI the daily decrease in LAI (m2 m−2 d−1) from dying of leaves, and
CropHarvNow a Boolean variable that only equals True on the day that the crop is being harvested.
The daily decrease in LAI (DLAI, m2 m−2 d−1) from dying of leaves is, analogous to the calculation of DLV
(the death rate of leaves in terms of mass), calculated as:

DLAI = self.LAI * RDR

with LAI the leaf area index (m2 m−2; state variable) and RDR the relative (daily) decline in LAI due to dying of leaves (-).
RDR in turn depends on two terms, the relative (daily) death rate of leaves due to aging (RDRDV, -) and the relative
(daily) death rate of leaves due to mutual shading (RDRSH, -), according to:

RDR = max(RDRDV, RDRSH)
The relative (daily) death rate of leaves due to aging (RDRDV, -), in turn, is calculated following a number of steps,
starting with:
with AtAndAfterAnthesis a Boolean variable that equals True if the crop has reached anthesis. Hence, if AtAndAfter-
Anthesis equals True, RDRDV is set equal to RDRTMP; in all other cases it is set to 0. RDRTMP is the relative (daily)
death rate obtained by interpolation in the RDRTB table, with DVS as the independent variable.
The relative (daily) death rate of leaves due to mutual shading (RDRSH, -) is calculated according to:
with RDRSHM a fixed daily relative death rate due to shading (-) and LAICR the critical LAI above which mutual
shading starts to occur.
Root depth growth can only occur if a number of (logical) conditions are simultaneously met:
where:
• RootGrowth is a Boolean variable indicating whether root depth growth occurs (True) or not (False).
• Enough_water is a Boolean variable indicating whether there is water available for crop growth (True) or not
(False; this is the case when the soil moisture content is at permanent wilting point).
• CanGroDownward is a Boolean variable indicating whether the roots can still grow downward, i.e. whether the
rooting depth self.ROOTD_mm (mm) is still smaller than the maximum rooting depth self.ROOTDM_mm (mm).
Hence, as long as the roots have not yet reached their maximum depth, CanGroDownward remains True. Actual root
growth only occurs where RootGrowth equals True:
where RROOTD_mm is the daily increase in rooting depth (mm), RootGrowth is a Boolean variable indicating whether
root depth growth occurs (True) or not (False). self.RRDMAX_mm is the maximum daily increase in rooting depth
(mm)
So, if RootGrowth is True, the daily increase in rooting depth (mm) will be set equal to the maximum daily increase
in rooting depth (mm); in all other cases it will be set to zero. There is no simulation of a reducing effect of root death
on rooting depth. Root death only impacts the weight of (living) roots as they are diminished with the (daily) death
rate of roots.
As wflow_lintul is a much simpler rice model than e.g. ORYZA2000 (Bouman et al., 2001), the effect of water stress
on crop growth is also modelled in a simpler way - there is no mechanistic simulation of crop drought responses
such as leaf rolling, or of events such as spikelet sterility. These events can only be taken into account indirectly, by
calibrating model drought response to match observed yield and biomass data – a process that has presently (June
2018) not yet been entirely completed.
A central parameter in modelling the response to water shortage in wflow_lintul is TRANRF, a factor that describes
the reducing effect of water shortage on biomass production, having a direct impact on overall crop production and
leaf area growth. It is calculated according to:
TRANRF = self.Transpiration/NOTNUL_pcr(self.PotTrans)
with self.Transpiration the actual (rice) crop transpiration (mm d−1), self.PotTrans the potential (rice) crop transpiration
(mm d−1) and NOTNUL_pcr a Python implementation of an FST intrinsic function (Rappoldt and Van Kraalingen, 1996)
that returns 1 if the value between parentheses equals 0, to prevent zero-division errors; in all other cases it returns the
unchanged value.
TRANRF, the ‘transpiration reduction factor’, thus quantifies the reducing impact of water shortage on biomass production.
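The NOTNUL-protected division can be illustrated with a small numpy analogue. This is only a sketch: NOTNUL_pcr in wflow operates on PCRaster maps, whereas the helper below works on plain arrays.

import numpy as np

def notnul(x):
    """NOTNUL-like helper (sketch): return 1 where x equals 0, else x unchanged."""
    x = np.asarray(x, dtype=float)
    return np.where(x == 0.0, 1.0, x)

# Transpiration reduction factor, protected against division by zero
transpiration = np.array([0.0, 2.5, 4.0])   # actual crop transpiration (mm/d)
pot_trans = np.array([0.0, 5.0, 4.0])       # potential crop transpiration (mm/d)
tranrf = transpiration / notnul(pot_trans)
print(tranrf)                               # [0.  0.5 1. ]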
Similarly, shoot growth is modified with a factor FSHMOD. FSHMOD is calculated according to:
with FRT the final allocated fraction of total crop growth allocated to the roots, after modification for water shortage.
FRTMOD a factor describing the modifying effect of drought stress on root growth.
References
• Bouman, B.A.M., van Keulen, H., van Laar, H.H., Rabbinge, R., 1996. The ‘School of de Wit’ crop growth
simulation models: a pedigree and historical overview. Agric. Syst. 52, 171/198.
• Bouman, B.A.M., M.J. Kropff, T.P. Tuong, M.C.S. Wopereis, H.F.M. ten Berge, and H.H. van Laar. 2001.
ORYZA2000: Modelling lowland rice. 235 p. International Rice Research Institute, Los Baños, Philippines,
Wageningen University and Research Centre, Wageningen, The Netherlands.
• Drenth, H., Ten Berge, H.F.M. and Riethoven, J.J.M.(Editors). ORYZA simulation modules for potential and
nitrogen limited rice production. SARP Research Proceedings-December, 1994. DLO-Research Institute for
Agrobiology and Soil fertility, Wageningen, WAU-Department of Theoretical Production Ecology, Wageningen,
IRRI-International Rice Research Institute, Los Banos, Pages: 197-210.
• Farré, I., Van Oijen, M., Leffelaar, P.A., Faci, J.M., 2000. Analysis of maize growth for different irrigation
strategies in northeastern Spain. Eur. J. Agron. 12, 225–238.
• Goudriaan, J. and van Laar, H. H., 1994. Modelling Potential Crop Growth Processes. Kluwer Academic
Publishers, Dordrecht, The Netherlands, 1994. pp. 238.
• Habekotté, B., 1997. Description, parameterization and user guide of LINTULBRASNAP 1.1. A crop growth
model of winter oilseed rape (Brassica napus L.). In: Quantitative Approaches in Systems Analysis No. 9.
Wageningen Agricultural University, Wageningen, The Netherlands, 40 pp.
• Van Ittersum, M.K., Rabbinge, R., 1997. Concepts in production ecology for analysis and quantification of
agricultural input-output combinations. Field Crops Research 52 (1997) 197-208
• Van Kraalingen, D.W.G., 1995. The FSE system for crop simulation, version 2.1. Quantitative Approaches in
Systems Analysis, No. 1. C.T. de Wit Graduate School for Production Ecology and Resource Conservation,
Wageningen University, The Netherlands, pp. 58.
• Monteith, J. L. (1969). Light interception and radiative exchange in crop stands. In Physiological aspects of
crop y,ie/d, eds J. D. Easton, F. A. Haskins, C. Y. 194 B. A. M. Bouman, H. van Keulen, H. H. van Laar, R.
Rabbinge Sullivan & C. H. M. van Bavel. American Society of Agronomy, Madison, Wisconsin. pp. 89-l 11.
• Monteith, J. L. (1990). Conservative behaviour in the response of crops to water and light. In Theoretical
Production Ecology: reflection and prospects, eds. R. Rabbinge, J. Goudriaan, H. van Keulen, F. W. T. Penning
de Vries & H. H. van Laar. Simulation Monographs, PUDOC, Wageningen, The Netherlands. pp. 3-16.
A number of settings of the framework can be set in the ini file for each model. The settings are explained in the
section below.
Information for the current run can be given in the run section. Here the start and end-time of the run as well as the
timestep can be given. Alternatively a link to a Delft-FEWS runinfo.xml file can be given. An example is shown
below.
[run]
#either a runinfo file or a start and end-time are required
#runinfofile=runinfo.xml
If this section is not present and a runinfo.xml is also not used you will need to specify the number of timesteps using
the -T option on the command line (for most models).
The in/output file formats can be specified in the framework section. At present only PCRaster mapstacks and netcdf
are available for input. See the supplied pcr2netcdf.py script for information on the layout of the netcdf files. If netcdf
files are used, the name of the mapstack is used as the standardname in the netcdf file.
[framework]
# outputformat for the *dynamic* mapstacks (not the states and summary maps)
# 1: pcraster
# 2: numpy
# 3: matlab
outputformat=1
As can be seen from the example above, a number of input/output streams can be switched on to work with netcdf files.
These are:
• netcdfinput. Time-dependent input. This does not work for climatology files at the moment.
• netcdfoutput. Time-dependent output.
• netcdfstaticoutput. Summary output at the end of a run, i.e. the maps that normally end up in the outsum directory.
• netcdfstatesoutput. The model’s state variables at the end of a run.
• netcdfstatesinput. The model’s input state variables at the start of a run.
To enhance performance when writing netcdf files a netcdfwritebuffer can be set. The number indicates the number
of timesteps to keep in memory before flushing the buffer. Setting the buffer to a large value may cause memory
problems.
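For example, a [framework] section combining netcdf input/output with a write buffer could look as follows (the file names are placeholders):

[framework]
netcdfinput=inmaps/forcing.nc
netcdfoutput=outmaps/outputs.nc
netcdfstaticoutput=outmaps/summary.nc
# keep 100 timesteps in memory before flushing to disk
netcdfwritebuffer=100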
In the ini file example below several variables are configured to be available via the API. For most settings this only
defines what the API will expose to the outside world. However, if you specify 0 (input) as a role for one of the forcing
variables the wf_readmap function will no longer read maps from disk for that variable but will return the contents
of that variable via the API.
The API section specifies variables that are exposed via the API. Use the following convention:
variable_name_in_model=variable_role,variable_unit
Use role 0 for input maps to the model (forcing data that are normally read from disk), role 1 for outputs, role 2
for state variables and role 3 for model parameters. The units may be chosen freely and may also be strings.
example:
[API]
FreeWater=2,4
SoilMoisture=2,4
UpperZoneStorage=2,4
LowerZoneStorage=2,4
InterceptionStorage=2,4
SurfaceRunoff=2,m^3/sec
WaterLevel=2,2
DrySnow=2,4
Percolation=1,0
ForecQ_qmec=0,1
PERC=3,5
FC=3,4
# Below are the forcing variables. By putting these here you MUST
# supply them via the API, if not these will default to 0.0
#P=0,0
PET=0,0
TEMP=0,3
Most of the time this section is not needed as this will mostly be configured in the Python code by the model developer.
However, in some cases this section can be used to alter the model, for example to force the model to read RootingDepth
from an external data source. Not all models support this. You can check whether the model you use supports this by
looking for the wf_updateparameters() function in the model code.
The format of entries in this section is as follows:
name=stack,type,default,verbose,[lookupmap_1],[lookupmap_2],...,[lookupmap_n]
[modelparameters]
RootingDepth=monthlyclim/ROOTS,monthlyclim,75,0
# Force the model to read monthly climatology of P
Precipitation=inmaps/P,monthlyclim,0.0,1
Example:
[modelparameters]
RootingDepth=monthlyclim/ROOT,monthlyclim,100,1
Sl=inmaps/clim/LCtoSpecificLeafStorage.tbl,tbl,0.5,1,inmaps/clim/LC.map
Kext=inmaps/clim/LCtoSpecificLeafStorage.tbl,tbl,0.5,1,inmaps/clim/LC.map
Swood=inmaps/clim/LCtoBranchTrunkStorage.tbl,tbl,0.5,1,inmaps/clim/LC.map
LAI=inmaps/clim/LAI,monthlyclim,1.0,1
In the two sections “variable_change_timestep” and “variable_change_once” you can set operations on parameters and
variables that are executed at the start of each timestep or once during the initialisation of the model, respectively. What you
specify here should be valid Python code and use variables that exist in the model you are using. This only works
if the actual model you are using includes the wf_multparameters() function. At the moment wflow_hbv, wflow_sbm,
wflow_w3ra and wflow_routing include this. See below for a configuration example. Some models may also support
this via the -P and -p command-line options.
[variable_change_timestep]
self.Precipitation = self.Precipitation * 1.2
# Multiplies the precipitation input by 1.2 every timestep
[variable_change_once]
self.PathFrac = self.PathFrac * 1.1
# Increases the paved area fraction of the model by 10 %
The rollingmean section allows you to define a rolling mean for each variable in the model. This variable can be used
by other applications (e.g. data assimilation) or you can report it as output. Example:
[rollingmean]
self.Surfacerunoff=12
The above will create a 12-timestep rolling mean and store it in the variable self.Surfacerunoff_mean_12
By adding variables to one or several of these sections the framework will save these variables to disk (using the value
at the end, sum, min, max or avg) at the end of a run.
The available sections are:
• summary - Saves the actual value of the variable
• summary_avg - Saves the average value over all timesteps of the variable
• summary_sum - Saves the sum over all timesteps of the variable
• summary_min - Saves the minimum value over all timesteps of the variable
• summary_max - Saves the maximum value over all timesteps of the variable
All maps are saved in the outsum directory of the current runid.
Example:
[summary]
self.MaxLeakage=MaxLeakage.map
# Save and average these per LU type
[summary_sum]
self.Precipitation=Sumprecip.map
[summary_max]
self.Precipitation=maxprecip.map
[summary_min]
self.Temperature=mintemp.map
[summary_avg]
self.Precipitation=avgprecip.map
[outputcsv_0-n] [outputtss_0-n]
Number of sections to define output timeseries in csv or tss format. Each section should at least contain one samplemap item
and one or more variables to save. The samplemap is the map that determines how the timeseries are averaged/sampled.
The function key specifies how the data is sampled: average (default), minimum, maximum, total or majority. The time-
format key can either be steps or datetime.
All other items are variable=filename pairs. The filename is given relative to the case directory.
Example:
[outputcsv_0]
samplemap=staticmaps/wflow_subcatch.map
self.SurfaceRunoffMM=Qsubcatch_avg.csv
function=average
# average is the default
timeformat = datetime
# steps is the default
[outputcsv_1]
samplemap=staticmaps/wflow_gauges.map
self.SurfaceRunoffMM=Qgauge.csv
self.WaterLevel=Hgauge.csv
[outputtss_0]
samplemap=staticmaps/wflow_landuse.map
self.SurfaceRunoffMM=Qlu.tss
function=total
In the above example the discharge of this model (self.SurfaceRunoffMM) is saved as an average per subcatchment, a
sample at the gauge locations and as an average per landuse.
The run section can contain information about the model timesteps, the date/time range, how to initialize the model
and how to interpret the forcing data.
[run]
# either a runinfo file or a start and end-time are required
#runinfo=runinfo.xml
starttime=2016-01-01 00:00:00
endtime=2016-01-04 00:00:00
# Base timestep of the model in seconds, default = 86400
The original PCRaster framework has no notion of date and time, only timesteps that are used to propagate a model
forward. However, to be able to support the BMI and netcdf files, date and time functionality has been inserted into the
framework.
As most of the forcing in hydrological modelling is an accumulation the date/time of an input time series is assumed
to represent the total (or average) of that variable one timestep length back in time from the timestamp.
For example, if the forcing data has the following four timestamps the model will run four timesteps. The first timestep
will be used to propagate the state from T0 to T1 (in that case the date/time of the state files is assumed to be T0). As such
the state going into the model should be valid for T0, see the graph below:
Here the first column shows the model steps and the second column the timestamp of the input/output data for that
timestep (an empty box means there is nothing for that step). The first empty row is regarded as the timestamp of the
initial conditions. The first forcing data is used to propagate the model from T0 (the state, 31 Dec 2015) to T1 (1 Jan
2016), which will also be the first output the model writes.
The above corresponds to the following date/time in the [run] section:
[run]
# either a runinfo file or a start and end-time are required
#runinfo=runinfo.xml
starttime=2016-01-01 00:00:00
endtime=2016-01-04 00:00:00
# required, base timestep of the model
timestepsecs = 86400
#start model with cold state
The above shows the default behaviour of the framework. For each data point in the input forcing a model step is
performed. The ‘runlengthdetermination’ variable in the run section can also be set to ‘intervals’. In that case the
number of steps is determined from the number of intervals in the forcing data. Hence, the following run will be
performed:
[run]
# either a runinfo file or a start and end-time are required
#runinfo=runinfo.xml
starttime=2016-01-01 00:00:00
endtime=2016-01-04 00:00:00
# required, base timestep of the model
timestepsecs = 86400
#start model with cold state
reinit=1
# Default behaviour: steps
runlengthdetermination=intervals
In this case the forcing data has the same input as in the previous case (4 timestamps), but now the first timestamp
is regarded as the time of the initial conditions. As such, one timestep less (three in total) is performed and the
forcing data from 01 Jan 2016 is NOT used, as it is regarded as the initial state. The first output point will be 02 Jan
2016, generated from the forcing marked as 2 Jan 2016.
The same applies when the start and end time of the model run are supplied via the bmi interface.
All items in this section are copied as global attributes into the netcdf output file. Example:
[netcdfmetadata]
license=https://opendatacommons.org/licenses/odbl/
note=Test runs, results are not final
Introduction
To run the model from Delft-FEWS the following actions need to be performed:
• The runinfo.xml file should be specified in the [run] section of the ini file
• The use of netcdf input and output should be switched on
• The postadapter (wflow_adapt.py) needs to be run after the wflow run
The postadapter also converts the log messages of the model into Delft-FEWS diagnostics XML format:
• the wflow log file (casename/runid/wflow.log) is converted to wflow_diag.xml
• the adapter log file is converted to wflow_adapt_diag.xml
Command line arguments:
An example of executing wflow from the Delft-FEWS general adapter is shown below:
<executeActivities>
<executeActivity>
<description>Run wflow</description>
<command><executable>bin-wflow\wflow_sbm.exe</executable></command>
<arguments>
<argument>-C</argument>
<argument>rhine</argument>
<argument>-f</argument>
</arguments>
<timeOut>7200000</timeOut>
</executeActivity>
<executeActivity>
<description>Run wflow post</description>
<command> <executable>bin-wflow\wflow_adapt.exe</executable> </command> <arguments>
<argument>-M</argument>
<argument>Post</argument>
<argument>-s</argument>
The wflow_adapt module can also be used by other programs to convert .tss files to PI-XML and vice versa. Below the API
documentation of the module is given.
In the above example the state files belonging to the model should be configured as shown below in the General Adapter
XML. In Delft-FEWS the read and write locations are specified as viewed from the model's point of view:
<stateLocation>
<readLocation>WaterLevel.map</readLocation>
<writeLocation>../run_default/outstate/WaterLevel.map</writeLocation>
</stateLocation>
# Repeat for all state variables
wflow_adapt.py: Simple wflow Delft-FEWS adapter in python. This file can be run as a script from the command-line
or be used as a module that provides (limited) functionality for converting PI-XML files to .tss and back.
Usage pre adapter:
wflow_adapt -M Pre -t InputTimeseriesXml -I inifile
Usage postadapter:
wflow_adapt -M Post -t InputTimeseriesXml -s inputStateFile -I inifile -o outputStateFile -r runinfofile -w
workdir -C case [-R runId]
Issues:
• Delft-FEWS exports data from 0 to timestep. PCRaster starts to count at 1. Renaming the files is not desirable.
The solution is to add a delay of 1 timestep in the GA run that exports the mapstacks to wflow.
• Not tested very well.
• There is a considerable amount of duplication (e.g. info in the runinfo.xml and the .ini file that you need to
specify again :-())
$Author: schelle $ $Id: wflow_adapt.py 915 2014-02-10 07:33:56Z schelle $ $Rev: 915 $
wflow_adapt.getEndTimefromRuninfo(xmlfile)
Gets the endtime of the run from the FEWS runinfo file
Warning:
This function does not fully parse the xml file and will only work properly
if the xml file has the dateTime element written on one line.
wflow_adapt.pixml_totss(nname, outputdir)
Converts a PI-XML timeseries file to a number of tss files.
The tss files are created using the following rules:
• tss filename determined by the content of the parameter element with a “.tss” postfix
• files are created in “outputdir”
• multiple locations will be multiple columns in the tss file written in order of appearance in the XML file
wflow_adapt.pixml_totss_dates(nname, outputdir)
Gets Date/time info from XML file and creates .tss files with:
• Day of year
• Hour of day
• Others may follow
wflow_adapt.setlogger(logfilename, loggername, thelevel=20)
Set-up the logging system and return a logger object. Exit if this fails
wflow_adapt.tss_topixml(tssfile, xmlfile, locationname, parametername, Sdate, timestep)
Converts a .tss file to a PI-xml file
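The functions documented above can also be called from another Python script. A minimal sketch is given below; the import path and the exact format expected for Sdate and timestep are assumptions and not confirmed by this documentation:

from wflow import wflow_adapt

# Convert a Delft-FEWS PI-XML timeseries export to .tss files in the "intss" directory
wflow_adapt.pixml_totss("forcing.xml", "intss")

# Convert a wflow .tss result back to a PI-XML file for one location/parameter
# (the expected type of Sdate and timestep is an assumption here)
wflow_adapt.tss_topixml("run_default/run.tss", "run.xml", "Gauge_1", "Q",
                        "2016-01-01 00:00:00", 86400)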
Introduction
The wflow_fit module provides simple automated least-squares fitting for the wflow models. It uses the scipy.optimize
function to perform the fitting.
The program works by multiplying the fit parameter with a factor and optimising this factor. To get the new optimised
parameters for your model you have to multiply your original parameters with the optimised factor. You can specify
measured and simulated Q pairs to use, and which area of the model you want to adjust for each simulated/measured
pair.
In order to use the fit module you must have:
• a working wflow model
• a tss file with measured discharge
• a [fit] section in the ini file
To be able to use the fit module you must add a [fit] section to the .ini file of the wflow model you want to fit.
To be able to use the fit module you must add a [fit] section to the .ini file of the wflow model you want to fit.
[fit]
# The parameters are named parameter_0 to parameter_n
parameter_0 = M
parameter_1 = RootingDepth
# Q specifies the tss file with measured discharge data
# the path is relative to the case directory
Q = testing.tss
# The columns in the measured Q you want to fit to
ColMeas = [1,5]
# The columns in the simulated Q you want to fit
ColSim = [1,5]
# Number of warmup timesteps. These are not used in fitting
WarmUpSteps = 1
# The map defining the areas you want to adjust
areamap=staticmaps/wflow_catchment.map
# The areas you want to adjust for each Qmeas/Qsim combination
areacode=[1,5]
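To illustrate the factor-based approach described above, the self-contained toy sketch below optimises two multiplication factors with scipy.optimize.least_squares. The toy_model function is a stand-in for a real wflow run (wflow_fit wraps the actual model via its API) and all numbers are made up:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.arange(200.0)

def toy_model(factors):
    # stand-in for "run wflow with M * factors[0] and RootingDepth * factors[1]"
    return factors[0] * np.exp(-t / 50.0) + factors[1] * 0.1

q_meas = toy_model([0.8, 1.3]) + rng.normal(0, 0.01, t.size)  # "measured" Q
warmup = 1  # WarmUpSteps: skipped in the objective, as in the ini example

def residuals(factors):
    return toy_model(factors)[warmup:] - q_meas[warmup:]

result = least_squares(residuals, x0=[1.0, 1.0])
print("optimised factors:", result.x)  # multiply the original parameter maps by these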
Fitting results
Results are saved in the wflow_fit.res file in the case/runid directory. In addition, the program saves a graph of modelled
and observed data in the file fit.png; maps of the original and fitted parameters are also saved.
If you specify the -U option the resulting maps are saved in the staticmaps directory after each step. As such, the next
steps (if you calibrate multiple subcatchments/areas) also include the results of the previous steps. Note that this will
overwrite your maps if you already have those!
Although wflow_sbm has a fairly large number of parameters most should not be fitted automatically. The parameters
that are most suited for fitting are:
• M
• FirstZoneKsatVer
• RunoffGeneratingGWPerc (if this is switched on. It is usually best to first setup the model without this parame-
ter!)
• RootingDepth
It is recommended to only fit one or two parameters at a time.
The wflow_rhine_sbm example can be used to test the fitting procedure.
wflow_fit.py -M wflow_sbm -T 300 -C wflow_rhine_sbm
-U: save the map after each step to the input (staticmaps) dir so
that next steps (columns) use the previous results
For this program to work you must add a [fit] section to the ini file of the program to fit (e.g. the wflow_hbv program)
$Author: schelle $ $Id: wflow_sbm.py 669 2013-05-16 05:25:48Z schelle $ $Rev: 669 $
wflow_fit.configget(config, section, var, default)
gets parameter from config file and returns a default value if the parameter is not found
class wflow_fit.wfmodel_fit_API(startTime, stopTime, casename, runId='_fitrun',
modeltofit='wflow_sbm', config='wflow_sbm.ini',
clonemap='wflow_subcatch.map')
Class that initializes and runs a wflow model
multVarWithPar(pars)
Multiply a parameter in the model with the fit parameters. Use a map to limit the area to adjust
The goal of this module is to provide a series of functions to upscale maps (DEM) and to maintain as much of the
information in a detailed DEM as possible when upscaling to a coarser DEM. These include:
• river length (per cell)
• river network location
• elevation distribution
• other terrain analysis
The wflow_prepare scripts use this library extensively.
$Author: schelle $ $Id: wflow_lib.py 808 2013-10-04 19:42:43Z schelle $ $Rev: 808 $
wflow_lib.Gzip(fileName, storePath=False, chunkSize=1048576)
Usage: Gzip(fileName, storePath=False, chunksize=1024*1024) Gzip the given file to the given storePath and
then remove the file. A chunk size may be selected. Default is 1 megabyte Input:
fileName: file to be GZipped storePath: destination folder. Default is False, meaning the file will
be zipped to its own folder chunkSize: size of chunks to write. If set too large, GZip will fail with
memory problems
wflow_lib.area_percentile(inmap, area, n, order, percentile)
calculates the percentile of inmap per area; order is the sorted order of inmap
per area (output of areaorder(inmap,area)) and n is the output of pcr.areatotal(pcr.spatial(pcr.scalar(1.0)),area)
Input:
• inmap
• area map
• n
• order (riverorder)
• percentile
Output:
• percentile map
wflow_lib.area_river_burnin(ldd, dem, order, Area)
Calculates the lowest values in a DEM for each area in an area map for rivers of order order
Input:
• ldd
• dem
• rivers=None – you can provide a rivers layer here. Pixels that are identified as river should have a value > 0, other
pixels a value of zero.
• basin=None – set a boolean pcraster map where areas with True are estimated using the nearest drain in ldd distance
and areas with False by means of the nearest friction distance. The friction distance is estimated using the upstream
area as weight (i.e. drains with a bigger upstream area have a lower friction); the spreadzone operator is used in this
case.
Output:
• stream_ge – pcraster object, streams of strahler order >= threshold
• subcatch – pcraster object, subcatchments of strahler order >= threshold
wflow_lib.sum_list_cover(list_of_maps, covermap)
Sums a list of pcraster maps using cover to fill in missing values
Parameters
• list_of_maps – list of maps to sum
• covermap – map/value to use for cover
Returns sum of list of maps (single map)
wflow_lib.upscale_riverlength(ldd, order, factor)
Upscales the river length using 'factor'. The resulting maps can be resampled (e.g. using resample.exe) by factor
and should include the accurate length as determined with the original higher resolution maps. This function is
deprecated; use are_riverlength instead as this version is very slow for large maps.
Introduction
wflow_funcs is a library of hydrological modules that can be used by any of the wflow models. It includes modules
related to:
• the kinematic wave routing for surface and subsurface flow
• rainfall interception by the vegetation
• snow and glaciers modelling
• reservoirs and lakes modelling.
Kinematic Wave
The main flow routing scheme used by the wflow models (wflow_sbm and wflow_hbv) is the kinematic wave approach
for channel and overland flow, assuming that the topography controls water flow mostly. The kinematic wave
equations are (Chow, 1988):

\frac{dQ}{dx} + \frac{dA}{dt} = q \qquad \text{and} \qquad A = \alpha Q^{\beta}

These equations can then be combined as a function of streamflow only:

\frac{dQ}{dx} + \alpha \beta Q^{\beta - 1} \frac{dQ}{dt} = q

where Q is the surface runoff in the kinematic wave [m3/s], x is the length of the runoff pathway [m], A is the
cross-section area of the runoff pathway [m2], t is the integration timestep [s] and \alpha and \beta are coefficients.
These equations are solved with a nonlinear scheme using Newton's method and can also be iterated depending on the
model's space and time resolution. By default, the iterations are performed until a stable solution is reached
(epsilon < 10^-12). For larger models, the number of iterations can also be fixed for wflow_sbm to a specific sub-
timestep (in seconds) for both overland and channel flows to improve simulation time. To enable (fixed or not)
iterations of the kinematic wave the following lines can be inserted in the ini files of the related models:
[model]
# Enable iterations of the kinematic wave
kinwaveIters = 1
# Fixed sub-timestep for iterations of channel flow (river cells)
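For illustration, the sketch below shows one possible Newton update for the combined kinematic wave equation above, following the nonlinear scheme of Chow et al. (1988). It is a simplified stand-alone version, not the wflow source; the variable names and the 10^-12 threshold mirror the text:

def kinwave_newton(Qold, Qin, q, alpha, beta, dt, dx, epsilon=1e-12, maxiter=3000):
    """Qold: Q in this cell at the previous timestep [m3/s]
    Qin : inflow from upstream cells at the new timestep [m3/s]
    q   : average lateral inflow per unit length [m2/s]"""
    # right-hand side, constant during the iteration
    C = dt / dx * Qin + alpha * Qold ** beta + dt * q
    Q = max(Qold, 1e-30)  # positive first estimate
    for _ in range(maxiter):
        f = dt / dx * Q + alpha * Q ** beta - C
        df = dt / dx + alpha * beta * Q ** (beta - 1.0)
        Qnew = max(Q - f / df, 1e-30)
        if abs(Qnew - Q) < epsilon:
            break
        Q = Qnew
    return Qnew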
For wflow_sbm and wflow_hbv Manning’s N values for the river can be specified through a N_River.tbl file in two
ways:
• N values are linked to land cover (col 1), sub catchment (col 2) and soil type (col 3) (default (NRiverMethod ==
1))
• N values are linked to streamorder (col 1) (NRiverMethod == 2)
To link N values to streamorder insert the following in the ini file:
[model]
nrivermethod = 2
In wflow_sbm the kinematic wave approach is used to route subsurface flow laterally. The saturated store 𝑆 can be
drained laterally by saturated downslope subsurface flow per unit width of slope 𝑤 [mm] according to:
q = \frac{K_0 \tan(\beta)}{f} \left( e^{-f z_i} - e^{-f z_t} \right)
where 𝛽 is element slope angle [deg.], 𝑞 is subsurface flow [𝑚𝑚2 /𝑡], 𝐾0 is the saturated hydraulic conductivity at the
soil surface [mm/t], 𝑧𝑖 is the water table depth [mm], 𝑧𝑡 is total soil depth [mm], and 𝑓 is a scaling parameter [𝑚𝑚−1 ]:
f = \frac{\theta_s - \theta_r}{M}
where 𝜃𝑠 is saturated water content [mm/mm] and 𝜃𝑟 is residual water content [mm/mm] and 𝑀 represents a model
parameter [mm], that determines the decrease of vertical saturated conductivity with depth.
Combining with the following continuity equation:
(\theta_s - \theta_r) \frac{\partial h}{\partial t} = -w \frac{\partial q}{\partial x} + wr
where h is the water table height [mm], x is the distance downslope [mm], and r is the net input rate [mm/t] to the
saturated store.
and substituting for h (\partial q / \partial h), gives:

w \frac{\partial q}{\partial t} = -cw \frac{\partial q}{\partial x} + cwr

where the celerity c = \frac{K_0 \tan(\beta)}{\theta_s - \theta_r} e^{-f z_i}
The kinematic wave equation for lateral subsurface flow is solved iteratively using Newton’s method.
Note: For the lateral subsurface flow kinematic wave the model timestep is not adjusted. For certain model timestep
and model grid size combinations this may result in loss of accuracy.
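As a small numerical illustration of the formulas above (all values are made up; units follow the text, mm and the model timestep t):

import math

K0 = 100.0                 # saturated conductivity at the soil surface [mm/t]
tan_beta = 0.05            # element slope tan(beta) [-]
theta_s, theta_r = 0.45, 0.05
M = 300.0                  # decay parameter [mm]
zi, zt = 500.0, 2000.0     # water table depth and total soil depth [mm]

f = (theta_s - theta_r) / M
q = K0 * tan_beta / f * (math.exp(-f * zi) - math.exp(-f * zt))  # lateral flux [mm2/t]
c = K0 * tan_beta / (theta_s - theta_r) * math.exp(-f * zi)      # kinematic celerity
print(q, c)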
Both the Gash and Rutter models are available in the wflow framework to estimate rainfall interception by the veg-
etation. The selection of an interception model depends on the simulation timestep. These modules are used by the
wflow_sbm model.
The analytical model of rainfall interception is based on Rutter's numerical model. The simplifications that were
introduced allow the model to be applied on a daily basis, although a storm-based approach will yield better results in
situations with more than one storm per day. The amount of water needed to completely saturate the canopy is defined as:
P' = \frac{-\overline{R} S}{\overline{E}_w} \ln\left[ 1 - \frac{\overline{E}_w}{\overline{R}} (1 - p - p_t)^{-1} \right]
where 𝑅 is the average precipitation intensity on a saturated canopy and 𝐸 𝑤 the average evaporation from the wet
canopy and with the vegetation parameters 𝑆, 𝑝 and 𝑝𝑡 as defined previously. The model uses a series of expressions
to calculate the interception loss during different phases of a storm. An analytical integration of the total evaporation
and rainfall under saturated canopy conditions is then done for each storm to determine average values of 𝐸 𝑤 and 𝑅.
The total evaporation from the canopy (the total interception loss) is calculated as the sum of the components listed in
the table below. Interception losses from the stems are calculated for days with 𝑃 ≥ 𝑆𝑡 /𝑝𝑡 . 𝑝𝑡 and 𝑆𝑡 are small and
neglected in the wflow_sbm model.
Table: Formulation of the components of interception loss according to Gash:
• For m small storms (P_g < P'_g): (1 - p - p_t) \sum_{j=1}^{m} P_{g,j}
• Wetting up the canopy in n large storms (P_g \geq P'_g): n (1 - p - p_t) P'_g - nS
• Evaporation from saturated canopy during rainfall: (\overline{E}_w / \overline{R}) \sum_{j=1}^{n} (P_{g,j} - P'_g)
• Evaporation after rainfall ceases for n large storms: nS
• Evaporation from trunks in q storms that fill the trunk storage: q S_t
• Evaporation from trunks in (m + n - q) storms that do not fill the trunk storage: p_t \sum_{j=1}^{m+n-q} P_{g,j}
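As an illustration, the components listed above can be summed for a series of storm totals. The sketch below is a direct transcription of the table; the function and argument names are hypothetical and the inputs are assumed to be storm (e.g. daily) rainfall totals in mm:

import numpy as np

def gash_interception_loss(Pg, Pg_sat, p, pt, S, St, Ew_over_R):
    """Total interception loss [mm] for an array of storm totals Pg.
    Pg_sat   : rainfall needed to saturate the canopy (P'_g) [mm]
    p, pt    : free throughfall and stemflow coefficients [-]
    S, St    : canopy and trunk storage capacities [mm]
    Ew_over_R: ratio of mean wet-canopy evaporation to mean rainfall rate [-]"""
    Pg = np.asarray(Pg, dtype=float)
    small = Pg < Pg_sat            # m small storms
    large = ~small                 # n large storms
    n = large.sum()

    loss_small = (1 - p - pt) * Pg[small].sum()
    loss_wetting = n * (1 - p - pt) * Pg_sat - n * S
    loss_saturated = Ew_over_R * (Pg[large] - Pg_sat).sum()
    loss_after = n * S
    # trunk evaporation: storms with P >= St/pt fill the trunk storage
    fills_trunk = Pg >= (St / pt) if pt > 0 else np.zeros_like(Pg, dtype=bool)
    loss_trunk_full = fills_trunk.sum() * St
    loss_trunk_partial = pt * Pg[~fills_trunk].sum()

    return (loss_small + loss_wetting + loss_saturated
            + loss_after + loss_trunk_full + loss_trunk_partial)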
In applying the analytical model, saturated conditions are assumed to occur when the hourly rainfall exceeds a certain
threshold. Often a threshold of 0.5 mm/hr is used. 𝑅 is calculated for all hours when the rainfall exceeds the threshold
to give an estimate of the mean rainfall rate onto a saturated canopy.
Gash (1979) has shown that in a regression of interception loss on rainfall (on a storm basis) the regression coefficient
should be equal to E_w/R. Assuming that neither E_w nor R vary considerably in time, E_w can be estimated in this way
from R in the absence of above-canopy climatic observations. Values derived in this way generally tend to be (much)
higher than those calculated with the Penman-Monteith equation.
The model can determine the Gash parameters from LAI maps. In order to switch this on you must define the LAI
variable in the model (as in the example below).
[modelparameters]
LAI=inmaps/clim/LAI,monthlyclim,1.0,1
Sl=inmaps/clim/LCtoSpecificLeafStorage.tbl,tbl,0.5,1,inmaps/clim/LC.map
Kext=inmaps/clim/LCtoExtinctionCoefficient.tbl,tbl,0.5,1,inmaps/clim/LC.map
Swood=inmaps/clim/LCtoBranchTrunkStorage.tbl,tbl,0.5,1,inmaps/clim/LC.map
Cmax(leaves) = Sl * LAI
The table below shows the lookup table for Sl (as determined from Pitman 1986, Lui 1998) for the GlobCover land cover
map.
190 0.04 Artificial surfaces and associated areas (Urban areas >50%)
200 0.04 Bare areas
210 0.04 Water bodies
220 0.04 Permanent snow and ice
230 - No data (burnt areas, clouds,...)
To get the total storage (Cmax) the woody part of the vegetation also needs to be added. This is done via a simple
lookup table between land cover and Cmax(wood):
(Figure: Cmax(leaves) is computed as Sl times the monthly LAI; Cmax(wood) is then added to obtain Cmax.)
The table below relates the land cover map to the woody part of the Cmax.
11 0.01 Post-flooding or irrigated croplands (or aquatic)
14 0.0 Rainfed croplands
20 0.01 Mosaic cropland (50-70%) / vegetation (grassland/shrubland/forest) (20-50%)
30 0.01 Mosaic vegetation (grassland/shrubland/forest) (50-70%) / cropland (20-50%)
180 0.01 Closed to open (>15%) grassland or woody vegetation on regularly flooded or waterlogged soil - Fresh, brackish or saline water
190 0.01 Artificial surfaces and associated areas (Urban areas >50%)
200 0.0 Bare areas
210 0.0 Water bodies
220 0.0 Permanent snow and ice
230 - No data (burnt areas, clouds,...)
The canopy gap fraction is determined using the k: extinction coefficient (van Dijk and Bruijnzeel 2001):
180 0.6 Closed to open (>15%) grassland or woody vegetation on regularly flooded or waterlogged soil - Fresh, brackish or saline water
For subdaily timesteps the model uses a simplification of the Rutter model. The simplified model is solved explicitly
and does not take drainage from the canopy into account.
def rainfall_interception_modrut(Precipitation, PotEvap, CanopyStorage,
                                 CanopyGapFraction, Cmax):
    """
    Interception according to a modified Rutter model. The model is solved
    explicitly and there is no drainage below Cmax.

    Returns:
        - NetInterception: P - TF - SF (may be different from the actual wet canopy
          evaporation)
        - ThroughFall:
        - StemFlow:
        - LeftOver: Amount of potential evaporation not used
        - Interception: Actual wet canopy evaporation in this timestep
        - CanopyStorage: Canopy storage at the end of the timestep
    """
    ##########################################################################
    # Interception according to a modified Rutter model with hourly timesteps#
    ##########################################################################
    # pcr refers to the PCRaster Python package (imported as pcr in wflow_funcs)
    p = CanopyGapFraction
    pt = 0.1 * p

    # Amount of precipitation that falls on the canopy
    Pfrac = pcr.max((1 - p - pt), 0) * Precipitation

    # The canopy storage cannot be larger than Cmax, there is no drainage below that
    DD = pcr.ifthenelse(CanopyStorage > Cmax, CanopyStorage - Cmax, 0.0)
    CanopyStorage = CanopyStorage - DD

    # Add the precipitation that falls on the canopy to the store
    CanopyStorage = CanopyStorage + Pfrac

    # Now do the Evap, make sure the store does not get negative
    dC = -1 * pcr.min(CanopyStorage, PotEvap)
    CanopyStorage = CanopyStorage + dC
    LeftOver = PotEvap + dC
    # Amount of evap not used

    # Spill the store again if it exceeds Cmax after adding the precipitation
    D = pcr.ifthenelse(CanopyStorage > Cmax, CanopyStorage - Cmax, 0.0)
    CanopyStorage = CanopyStorage - D

    # Calculate throughfall and stemflow
    ThroughFall = DD + D + p * Precipitation
    StemFlow = Precipitation * pt

    # Net interception and the actual wet canopy evaporation in this timestep
    NetInterception = Precipitation - ThroughFall - StemFlow
    Interception = -dC

    return NetInterception, ThroughFall, StemFlow, LeftOver, Interception, CanopyStorage
Snow and glacier processes, from the HBV model, are available in the wflow framework. The snow and glacier
functions are used by the wflow_sbm model, and the glacier function is used by the wflow_hbv model.
Snow modelling
If the air temperature, T_a, is below a user-defined threshold TT (≈ 0 °C) precipitation occurs as snowfall, whereas it
occurs as rainfall if T_a ≥ TT. Another parameter TTI defines how precipitation can occur partly as rain and partly as
snowfall (see the figure below). If precipitation occurs as snowfall, it is added to the dry snow component within the snow pack.
Otherwise it ends up in the free water reservoir, which represents the liquid water content of the snow pack. Between
the two components of the snow pack, interactions take place, either through snow melt (if temperatures are above
a threshold 𝑇 𝑇 ) or through snow refreezing (if temperatures are below threshold 𝑇 𝑇 ). The respective rates of snow
melt and refreezing are:
Q_m = cfmax (T_a - TT) \quad ; \quad T_a > TT
Q_r = cfmax \cdot cfr (TT - T_a) \quad ; \quad T_a < TT
where Q_m is the rate of snow melt, Q_r is the rate of snow refreezing, and cfmax and cfr are user-defined model
parameters (the melting factor in mm/(°C day) and the refreezing factor respectively).
Note: The FoCFMAX parameter from the original HBV version is not used. Instead the CFMAX is presumed to be
for the land use per pixel. Normally for forested pixels the CFMAX is 0.6 * CFMAX.
The air temperature, 𝑇𝑎 , is related to measured daily average temperatures. In the original HBV-concept, elevation dif-
ferences within the catchment are represented through a distribution function (i.e. a hypsographic curve) which makes
the snow module semi-distributed. In the modified version that is applied here, the temperature, 𝑇𝑎 , is represented in
a fully distributed manner, which means for each grid cell the temperature is related to the grid elevation.
The fraction of liquid water in the snow pack (free water) is at most equal to a user-defined fraction, WHC, of the
water equivalent of the dry snow content. If the liquid water concentration exceeds WHC, either through snow melt
or incoming rainfall, the surplus water becomes available for infiltration into the soil:

Q_{in} = \max(SW - WHC \cdot SD,\; 0)

where Q_in is the volume of water added to the soil module, SW is the free water content of the snow pack and SD is
the dry snow content of the snow pack.
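A schematic transcription of the snow equations above into a single per-cell timestep is given below (scalar Python rather than PCRaster maps, illustrative only; the linear rain/snow partition over the TTI interval is an interpretation of the figure referred to above):

def snow_step(snow, snow_water, precip, Ta, TT, TTI, cfmax, cfr, WHC):
    # rain/snow partition: linear between TT - TTI/2 and TT + TTI/2
    frac_snow = min(max((TT + TTI / 2.0 - Ta) / max(TTI, 1e-6), 0.0), 1.0)
    snow += precip * frac_snow
    snow_water += precip * (1.0 - frac_snow)

    if Ta > TT:                                   # snow melt
        melt = min(cfmax * (Ta - TT), snow)
        snow -= melt
        snow_water += melt
    else:                                         # refreezing of free water
        refreeze = min(cfmax * cfr * (TT - Ta), snow_water)
        snow += refreeze
        snow_water -= refreeze

    q_in = max(snow_water - WHC * snow, 0.0)      # surplus water to the soil
    snow_water -= q_in
    return snow, snow_water, q_in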
Glacier processes can be modelled if the snow model is enabled in wflow_sbm. For wflow_hbv snow modelling is not
optional. Glacier modelling is very close to snow modelling and considers two main processes: glacier build-up from
snow turning into firn/ice (using the HBV-light model) and glacier melt (using a temperature degree-day model).
The definition of glacier boundaries and initial volume is defined in three staticmaps. GlacierAreas is a map containing
the ID of the glacier present in the wflow cell. GlacierFrac is a map that gives the fraction of each grid cell covered by
a glacier as a number between zero and one. GlacierStore is a state map that gives the amount of water (in mm w.e.)
within the glaciers at each gridcell. Because the glacier store (GlacierStore.map) cannot be initialized by running the
model for a couple of years, a default initial state map should be supplied by placing a GlacierStore.map file in the
staticmaps directory. These three maps are prepared from available glacier datasets.
First, a fixed fraction of the snowpack on top of the glacier is converted into ice for each timestep and added to the
glacier store using the HBV-light model (Seibert et al.,2017). This fraction, defined in the lookup table G_SIfrac,
typically ranges from 0.001 to 0.006.
Then, when the snowpack on top of the glacier is almost all melted (snow cover < 10 mm), glacier melt is enabled
and estimated with a degree-day model. If the air temperature, 𝑇𝑎 , is below a certain threshold 𝐺_𝑇 𝑇 (≈ 0𝑜 𝐶)
precipitation occurs as snowfall, whereas it occurs as rainfall if 𝑇𝑎 ≥ 𝐺_𝑇 𝑇 .
With this the rate of glacier melt in mm is estimated as:

Q_m = G\_Cfmax (T_a - G\_TT) \quad ; \quad T_a > G\_TT

where Q_m is the rate of glacier melt and G_Cfmax is the melting factor in mm/(°C day). Parameters G_TT and
G_Cfmax are defined in two lookup tables. G_TT can be taken as equal to the snow TT parameter. Values of the
melting factor normally vary from one glacier to another and some values are reported in the literature. G_Cfmax
can also be estimated by multiplying snow Cfmax by a factor between 1 and 2, to take into account the higher albedo
of ice compared to snow.
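A schematic per-cell glacier step following this description (scalar Python, illustrative only; the 10 mm snow-cover threshold is taken from the text):

def glacier_step(snow, glacier_store, Ta, G_TT, G_Cfmax, G_SIfrac):
    # snow-to-firn/ice conversion (HBV-light): a fixed fraction per timestep
    to_ice = G_SIfrac * snow
    snow -= to_ice
    glacier_store += to_ice

    # degree-day glacier melt only once the snow cover on top is almost gone
    melt = 0.0
    if snow < 10.0 and Ta > G_TT:
        melt = min(G_Cfmax * (Ta - G_TT), glacier_store)
        glacier_store -= melt
    return snow, glacier_store, melt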
Glacier modelling can be enabled by including the following entries in the modelparameters section:
[modelparameters]
GlacierAreas = staticmaps/wflow_glacierareas.map,staticmap,0.0,0
GlacierFrac = staticmaps/wflow_glacierfrac.map,staticmap,0.0,0
G_TT = intbl/G_TT.tbl,tbl,0.0,1,staticmaps/wflow_glacierareas.map
G_Cfmax = intbl/G_Cfmax.tbl,tbl,3.0,1,staticmaps/wflow_glacierareas.map
G_SIfrac = intbl/G_SIfrac.tbl,tbl,0.001,1,staticmaps/wflow_glacierareas.map
The initial glacier volume wflow_glacierstore.map should also be added in the staticmaps folder.
Simplified reservoir and lake models are included in the framework and used by the wflow_sbm, wflow_hbv and
wflow_routing models.
Reservoirs
Simple reservoirs can be included within the kinematic wave routing by supplying two maps: one map with the outlets
of the reservoirs, in which each reservoir has a unique id (ReserVoirSimpleLocs), and one map with the extent of each
reservoir (ReservoirSimpleAreas). Furthermore a set of lookup tables must be defined linking the reservoir ids to
reservoir characteristics:
• ResTargetFullFrac.tbl - Target fraction full (of max storage) for the reservoir: number between 0 and 1
• ResTargetMinFrac.tbl - Target minimum full fraction (of max storage). Number between 0 and 1 < ResTarget-
FullFrac
• ResMaxVolume.tbl - Maximum reservoir storage (above which water is spilled) [m3 ]
• ResDemand.tbl - Minimum (environmental) flow requirement downstream of the reservoir m3 /s
• ResMaxRelease.tbl - Maximum Q that can be released if below spillway [m3 /s]
• ResSimpleArea.tbl - Surface area of the reservoir [m2 ]
By default the reservoirs are not included in the model. To include them put the following lines in the .ini file of the
model.
[modelparameters]
# Add this if you want to model reservoirs
ReserVoirSimpleLocs=staticmaps/wflow_reservoirlocs.map,staticmap,0.0,0
ReservoirSimpleAreas=staticmaps/wflow_reservoirareas.map,staticmap,0.0,0
ResSimpleArea = intbl/ResSimpleArea.tbl,tbl,0,0,staticmaps/wflow_reservoirlocs.map
ResTargetFullFrac=intbl/ResTargetFullFrac.tbl,tbl,0.8,0,staticmaps/wflow_reservoirlocs.map
ResTargetMinFrac=intbl/ResTargetMinFrac.tbl,tbl,0.4,0,staticmaps/wflow_reservoirlocs.map
ResMaxVolume=intbl/ResMaxVolume.tbl,tbl,0.0,0,staticmaps/wflow_reservoirlocs.map
ResMaxRelease=intbl/ResMaxRelease.tbl,tbl,1.0,0,staticmaps/wflow_reservoirlocs.map
ResDemand=intbl/ResDemand.tbl,tblmonthlyclim,1.0,0,staticmaps/wflow_reservoirlocs.map
In the above example most values are fixed throughout the year; only ResDemand is given per month of the year.
Natural Lakes
Natural (uncontrolled) lakes can be modelled in wflow using a mass balance approach:

\frac{S(t + \Delta t) - S(t)}{\Delta t} = Q_{in} + \frac{(P - E) \cdot A}{\Delta t} - Q_{out}

where S is the lake storage, Q_in the inflow, Q_out the lake outflow, and P and E the precipitation and evaporation over
the lake area A. The mass balance is then solved by linearization and iteration or using the Modified Puls Approach
from Maniak (Burek et al., 2013). Storage curves in wflow can either:
• Come from the interpolation of field data linking volume and lake height,
• Be computed from the simple relationship 𝑆 = 𝐴 * 𝐻.
Rating curves in wflow can either:
• Come from the interpolation of field data linking lake outflow and water height,
• Be computed from a rating curve of the form Q_{out} = \alpha (H - H_0)^{\beta}, where H_0 is the minimum water level
under which the outflow is zero. Usual values for \beta are 3/2 for a rectangular weir or 2 for a parabolic weir (Bos,
1989).
The Modified Puls Approach is a resolution method of the lake balance that uses an explicit relationship between
storage and outflow. Storage is assumed to be equal to A*H and the rating curve is the one for a parabolic weir (β = 2):

S = A \cdot H = A \cdot (h + H_0) = \frac{A}{\sqrt{\alpha}} \sqrt{Q} + A \cdot H_0

\frac{A}{\Delta t \sqrt{\alpha}} \sqrt{Q} + Q = \frac{S(t)}{\Delta t} + Q_{in} + \frac{(P - E) \cdot A}{\Delta t} - \frac{A \cdot H_0}{\Delta t} = SI - \frac{A \cdot H_0}{\Delta t}
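Substituting x = sqrt(Q) turns the last equation into a quadratic in x, which is how the outflow can be obtained explicitly. A small stand-alone sketch of that step (not the wflow source):

import math

def puls_outflow(SI, A, alpha, H0, dt):
    """Solve (A / (dt*sqrt(alpha))) * sqrt(Q) + Q = SI - A*H0/dt for Q.
    With x = sqrt(Q) this is the quadratic x**2 + b*x - c = 0,
    where b = A / (dt*sqrt(alpha)) and c = SI - A*H0/dt."""
    b = A / (dt * math.sqrt(alpha))
    c = SI - A * H0 / dt
    if c <= 0.0:                 # lake level at or below the outflow threshold
        return 0.0
    x = (-b + math.sqrt(b * b + 4.0 * c)) / 2.0
    return x * x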
Natural lakes can be included within the kinematic wave routing in wflow by supplying two maps with the locations
of the lakes (one map for the extents, LakeAreas.map, and one for the outlets, LakeLocs.map) in which each lake
has a unique id. Furthermore a set of lookup tables must be defined linking the lake ids to lake characteristics:
• LakeArea: surface area of the lakes [m2 ]
• LakeAvgLevel: average lake water level [m], used to reinitiate model states.
• LakeThreshold: water level threshold H0 under which outflow is zero [m].
• LakeStorFunc: type of lake storage curve ; 1 for S = AH (default) and 2 for S = f(H) from lake data and
interpolation.
• LakeOutflowFunc: type of lake rating curve; 1 for Q = f(H) from lake data and interpolation, 2 for the general
Q = b(H - H0)^e and 3 in the case of the Puls Approach Q = b(H - H0)^2 (default).
• Lake_b: rating curve coefficient.
• Lake_e: rating curve exponent.
By default, the lakes are not included in the model. To include them, put the following lines in the .ini file of the model:
LakeLocs=staticmaps/wflow_lakelocs.map,staticmap,0.0,0
LakeAreasMap=staticmaps/wflow_lakeareas.map,staticmap,0.0,0
LinkedLakeLocs=intbl/LinkedLakeLocs.tbl,tbl,0,0,staticmaps/wflow_lakelocs.map
LakeStorFunc = intbl/LakeStorFunc.tbl,tbl,1,0,staticmaps/wflow_lakelocs.map
LakeOutflowFunc = intbl/LakeOutflowFunc.tbl,tbl,3,0,staticmaps/wflow_lakelocs.map
LakeArea = intbl/LakeArea.tbl,tbl,1,0,staticmaps/wflow_lakelocs.map
LakeAvgLevel = intbl/LakeAvgLevel.tbl,tbl,1,0,staticmaps/wflow_lakelocs.map
LakeAvgOut = intbl/LakeAvgOut.tbl,tbl,1,0,staticmaps/wflow_lakelocs.map
LakeThreshold = intbl/LakeThreshold.tbl,tbl,0,0,staticmaps/wflow_lakelocs.map
Lake_b = intbl/Lake_b.tbl,tbl,50,0,staticmaps/wflow_lakelocs.map
Lake_e = intbl/Lake_e.tbl,tbl,2.0,0,staticmaps/wflow_lakelocs.map
Additional settings
References
• Burek P., Van der Knijf J.M., Ad de Roo, 2013. LISFLOOD – Distributed Water Balance and flood Simulation
Model – Revised User Manual. DOI: http://dx.doi.org/10.2788/24719.
• Bos M.G., 1989. Discharge measurement structures. Third revised edition, International Institute for Land
Reclamation and Improvement ILRI, Wageningen, The Netherlands.
In addition this library contains a number of hydrological functions that may be used within the wflow models.
It contains the kinematic wave, interception, snow/glaciers and reservoirs/lakes modules.
wflow_funcs.SnowPackHBV(Snow, SnowWater, Precipitation, Temperature, TTI, TT, TTM, Cfmax,
WHC)
HBV Type snowpack modelling using a Temperature degree factor. All correction factors (RFCF and SFCF)
are set to 1. The refreezing efficiency factor is set to 0.05.
Parameters
• Snow –
• SnowWater –
• Precipitation –
• Temperature –
• TTI –
• TT –
• TTM –
The wflow_delwaq module provides a set of functions to construct a delwaq pointer file from a PCRaster local drainage
network. A command-line interface is provided that allows you to create a delwaq model that can be linked to a wflow
model.
The script sets-up a one-layer model (representing the kinematic wave reservoir). Water is labeled according to the
area and flux where it enters the kinematic wave reservoir.
For the script to work a run of the wflow model must be available, and a template directory in which the delwaq model
is created should also be available. These are indicated by the -C, -R and -D command line options. The -R and -C
options indicate the wflow case and run directories while the -D option indicates the delwaq template directory.
The template used is shown below:
debug/
fixed/
fixed/B2_numsettings.inc
fixed/B4_dispersion.inc
fixed/B4_dispx.inc
fixed/B9_Hisvar.inc
fixed/B9_Mapvar.inc
includes_deltashell/
The debug, includes_flow, and includes_deltashell directories are filled by the script. After that the delwaq1.exe and
delwaq2.exe programs may be run (the run.bat file shows how this is done). The script sets up delwaq such that the
results for the wflow gauge locations are stored in the deltashell.his file.
The pointer file for delwaq is made using the following information:
1. The wflow_ldd.map file is used to create the internal flow network; it defines the segments and how water flows
between the segments
2. The number of inflows into each segment is taken from the sources mapstacks (-S option). Together
these sources should include all the water that enters the kinematic wave reservoir. These are the red and green
arrows in the figure below
3. The delwaq network is generated for the area defined in the wflow_catchment map. The included area is defined
by all cells where the catchment id in a cell is larger than 1.
T=1: Volume=1, Flow=1 to 2
T=2: Volume=2, Flow=2 to 3
T=3: Volume=3, Flow=3 to 4
T=4: Volume=4, Flow: may be zero
The volume.dat file is filled with N+1 steps of volumes of the wflow kinematic wave reservoir. To obtain the needed
lag between the flows and the volumes, the volumes are taken from the kinematic wave reservoir one timestep back
(OldKinWaveVolume).
The flow.dat file is filled as follows. For each timestep the internal flows (within the kinematic wave reservoir, i.e. flows
from segment to segment) are written first (blue in the layout above). Next the flows into each segment are written.
The following very simple example demonstrates how the pointer file is created. First the pcraster ldd:
;Written by dw_WritePointer
;nr of pointers is: 20
1 3 0 0
2 4 0 0
3 5 0 0
4 6 0 0
5 7 0 0
6 8 0 0
7 9 0 0
8 10 0 0
9 -1 0 0
10 -2 0 0
-3 1 0 0
To estimate the load of different nutrients to the Johor strait two wflow_sbm models have been set up. Next these models
were linked to delwaq as follows:
1. A delwaq segment network similar to the wflow D8 ldd was made
2. The volumes in the delwaq segments are taken from the wflow_sbm kinematic wave volumes
3. For each segment two sources (inflows) are constructed, fast and slow, each representing different runoff com-
partments from the wflow model. Fast represents SOF1 , HOF2 and SSSF3 while Slow represents groundwater
flow.
4. Next the flow types are combined with the available land-use classes. As such a LU-classes times flow-types matrix
of constituents is made. Each constituent (e.g. Slow flow of LU class 1) is traced throughout the system. All
constituents are conservative and have a concentration of 1 as they flow into each segment.
5. To check for consistency an Initial water type and a Check water type are introduced. The Initial water will
leave the system gradually after a cold start; the Check water type is added to each flow component and should
be 1 at each location in the system (mass balance check).
The above results in a system in which the different flow types (including the LU type where they have been generated)
can be traced throughout the system. At each gauge location the discharge and the flow components that make up
the discharge are reported.
By assuming each flow type is an end-member in a mixing model, we can assign fixed concentrations of real parameters
to the flow fractions and multiply those with the concentrations of the end-members; a modelled concentration at the
gauge locations can then be obtained for each timestep.
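In other words, the modelled concentration at a gauge is the flow-fraction-weighted sum of fixed end-member concentrations. A tiny illustration with made-up fraction and concentration values:

# flow fractions at the gauge for this timestep (should sum to 1)
fractions = {"Slow_LU1": 0.45, "Fast_LU1": 0.30, "Fast_LU2": 0.25}
# fixed end-member concentrations, e.g. total P in mg/l (illustrative values)
conc = {"Slow_LU1": 0.02, "Fast_LU1": 0.15, "Fast_LU2": 0.40}

c_gauge = sum(fractions[k] * conc[k] for k in fractions)
print(f"modelled concentration at the gauge: {c_gauge:.3f} mg/l")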
The figure above shows the flow types in the models used in Singapore and Malaysia. Groundwater flow (both from
completely saturated cells and subcell groundwater flow) makes up the Slow flow that is fed into the delwaq model,
while SOF and HOF make up the Fast flow to the delwaq model. In addition the water is also labelled according to the
landuse type of the cell that it flows out of.
The whole procedure was set up in a Delft-FEWS configuration that can run the following steps operationally:
1 SOF: Saturation Overland Flow
2 HOF: Hortonian Overland Flow (or infiltration excess Overland Flow)
3 SSSF: SubSurface Storm Flow. Rapid lateral flow through the top part of the soil profile.
Fig. 28: Figure: Discharge, flow types and resulting total P for a catchment in Malaysia.
The wflow_emwaq module provides a set of functions to create and link a Delft3D-Delwaq or D-Emission model to a
wflow model. The module converts both static and dynamic data from wflow_sbm to readable structure, flow
and emission data used to set up an emission or a water quality model (later referred to as D-Emission and Delwaq).
It is an extension of the wflow_delwaq module, which only handles the surface water for a Delwaq model. In addition,
the script can aggregate results from wflow cells to build an emission/water quality model at the sub-catchment scale
instead. The module has so far only been tested for wflow_sbm but could as well be applied to other wflow models.
The wflow_sbm model is a fully distributed hydrologic model working on a regular grid of cells. Each cell is composed
of several layers (or water buckets) such as open water, the unsaturated store or the saturated store of the soil (figure
below). The link between cells (the direction of the lateral water flow) is then defined according to the local drain
direction map (ldd), which indicates which of the cell's neighbours has the lowest elevation.
D-Emission and Delwaq are unstructured models. They are composed of unordered segments. Contrary to a wflow
cell, which contains several layers, a D-Emission/WAQ segment only represents one layer, called a compartment. This
means that one wflow cell is represented by several segments in D-Emission/WAQ: one for the open water, one for the
unsaturated store, one for the saturated store, etc. The direction of the flows in D-Emission/WAQ between segments
is defined in the pointer file. Contrary to the wflow ldd, which only needs to define the direction of lateral flows, the pointer
file therefore also needs to indicate the direction of vertical flows (flows between the different layers/compartments of
a same wflow cell). This also means that external flows coming to/out of a wflow cell (for example precipitation from
the atmosphere) are defined in the pointer as flows between a segment and a boundary (for precipitation the boundary
is the atmosphere).
The main goal of the wflow_emwaq module is then to convert the wflow cells and ldd map into D-Emission/WAQ segments,
compartments and pointer, and to write the output flows from wflow into new flow files respecting the created
cells/pointer framework.
As the cells grid from wflow_sbm can lead to a very large model (especially the 1km resolution of the global
wflow_sbm model), it is possible with the coupling to aggregate the results from wflow cells to subcatchments. With
this option, a D-Emission/WAQ segment represents then a specific compartment of a wflow subcatchment instead of a
specific compartment for a wflow cell.
Generalities
The wflow_emwaq module is a python script with a similar structure to other wflow modules. First, a wflow_sbm
model is set up and run normally via the ini file and input data (tbl, staticmaps, inmaps...). Then the wflow_emwaq
module is run to convert the outputs of wflow_sbm into several files that serve as inputs to the D-Emission/WAQ models
(see scheme below). The set-up of the wflow_emwaq run is also done via an ini file similar to wflow_sbm and via three
csv tables that select the compartments, fluxes and boundaries that the user wants to include in the D-Emission/WAQ
models. Finally, the links to the different files created by the wflow_emwaq module are written in the D-Emission/WAQ
main input file and the emission and water quality model can be run.
In order to set up an emission/water quality model, the user needs to choose which compartments and which fluxes
to model. To aid that choice, the following figure sums up the different fluxes and the different lay-
ers/compartments of a wflow cell with their corresponding volumes.
For example, for a simple Delwaq run for the surface water with fraction calculation, only the surface water compart-
ment is needed and all the fluxes coming in/out of it as shown in the left part of the following figure. In that case, the
other compartments are turned into boundaries. In addition, as there is already a wflow variable that sums up all the
in/outflows from the surface water (self.Inwater), the scheme can be simplified with just one boundary and flow (right
part of the following figure).
Fig. 33: Compartments and fluxes needed for a Delwaq fraction model (left: with all the fluxes, right: simplified).
The boundaries.csv file, which is not used by the script but for user information, is a table composed of three columns
(Table 2, columns in blue only need to be consistent between the csv files):
• Nr: number of the boundary. This field is not used by the python script of the module but is defined for user
information.
• ID: simplified identifier for the boundary, usually a few letters. The user can choose any names but IDs must be
consistent with the ones in the fluxes.csv and compartments.csv files.
• Name: name of the boundary. This field is not used by the python script of the module but is defined for user
information.
Table 2: Example of the boundaries.csv file for a Delwaq fraction model
The fluxes.csv file is a table composed of nine columns (Table 3, columns in red need to be filled with precise keywords
used in the coupling script, columns in blue only need to be consistent between the csv files, columns in green are
required only for a coupling with an emission model):
• Nr: number of the flux. This field is not used by the python script of the module but is defined for user
information.
Then, contrary to Delwaq, wflow_sbm fluxes are not saved in a flow.dat file that represents exactly the pointer structure
but rather in a hydrology.bin file that only saves some of the fluxes. This means that some wflow_sbm fluxes are
needed just in the pointer or just in the hydrology file. In addition, in the hydrology file, an additional flow record is the
"TotalFlow" that sums up some of the wflow_sbm fluxes to the surface water. These fluxes may already be needed for the
pointer or the hydrology file or just for the "TotalFlow". In order to tell the coupling code which flux is needed where,
the column EmPointerHyd is added in the fluxes.csv file. This column is filled with specific keywords:
• P: flux only needed for the pointer (for example the flux from the Sewage to the Waste Water Treatment Plant).
• H: flux only needed in the hydrology file.
• T: flux only needed for the calculation of the “TotalFlow” (for example the precipitation falling directly to the
surface runoff compartment self.RunoffOpenWater).
• PH: flux needed for the pointer and the hydrology file.
• PT: flux needed for the pointer and “TotalFlow”.
• HT: flux needed for the hydrology file and “TotalFlow”.
• PHT: flux needed for the pointer, hydrology file and “TotalFlow”.
The coupling can also prepare .inc emission data for D-Emission/WAQ that are adapted to the wflow schematisation.
In order to enable this option, emission data must be prepared as wflow PCRaster map files and the links to these
maps written in an emissions.csv file. The emissions.csv file contains 5 columns (Table 4, columns in red need to be
filled with precise keywords used in the coupling script, columns in blue only need to be consistent between the csv
files):
• Nr: number of the emission data.
• Name: name of the emission
• To: ID of the compartment where the emission are located.
• Fileloc: the link to the .map emission file. Path is relative to the wflow model casename.
• AggType: if data need to be aggregated to the subcatchment level, identify the aggregation operation (total,
average, maximum, minimum).
Table 4: Example of the emissions.csv file
If the user wants to aggregate the results from wflow cells to subcatchments, then wflow outputs are not saved as
classical map files (netCDF or PCRaster) but as CSV tables where all the different fluxes and volumes are already
aggregated. To create the CSV tables, wflow_sbm variables are not saved in the outputmaps section of the ini file
but in the different outputcsv sections. Complete settings of these sections are described in the wflow documentation.
For a coupling with water quality and emission modelling, only two types of CSV files need to be saved: wflow
variables that are sampled at the outlet of the subcatchments and wflow variables that are summed up over the entire
subcatchment area.
The corresponding settings for variables summed over the entire catchment area are:
• Type of wflow variables concerned: all the vertical fluxes between compartments and or boundaries, plus the
variables representing the compartments volume/storage.
• samplemap: link to the map of the subcatchments where results should be summed up.
• function: type of function to use for the sampling by subcatchments. For the coupling, the sum is needed and the
function is then total. (Other possible wflow_sbm functions are average, minimum, maximum...).
• List of wflow variables to save with the “.csv” extension.
Warning: If the aggregation of wflow_sbm results from cells to subcatchments is used, the wflow variable used
for the surface water compartment is self.KinWaveVolume instead of self.WaterLevel
As for wflow_sbm, wflow_emwaq also has its own ini file wflow_emwaq.ini. The different sections are (see the
template for examples):
• inputcsv: similar to the inputmapstacks section in wflow_sbm.ini, it lists the links to the csv tables. By default,
the csv files are placed in a new csv folder in the model directory of the wflow case. Required arguments are:
– sepcsv: separator between each field of the csv file. Usually “,” or “;”
– compartments: link to the compartments.csv file.
– boundaries: link to the boundaries.csv file.
– fluxes: link to the fluxes.csv file.
– emissions: link to the emissions.csv file. If no path is specified the coupling won’t produce any emission
data.
• run: similar to the run section in wflow_sbm.ini. Required fields are:
– starttime: beginning of the simulation.
– endtime: end of the simulation.
– timestepsecs: length of the time step in seconds.
– runlengthdetermination: either intervals or steps depending on the format of the timestep.
• model: maps and options to use for the model run.
Type of the results saved by wflow_sbm. Indicate which type of file the coupling should read from. Options are:
– input_type: type of wflow outputs to read. Can be “netcdf” or PCRaster “map” files or “csv” tables. The
csv tables option is the one to use if aggregation of results to subcatchments is chosen.
– netcdfinput: name of the possible NetCDF output file of wflow_sbm is required. Can be left empty if
input_type is either map or csv.
Options for wflow_emwaq run (0 to turn off or 1 to turn on):
– write_ascii: dynamic data for D-Emission/WAQ are saved in binary files. If this option is on, an ASCII
copy will also be created. Default is 0.
– write_structure: if on, structure data for D-Emission/WAQ are produced. Note that this option can be used
without previously running wflow_sbm. Default is 1.
– write_dynamic: if on, dynamic data for D-Emission/WAQ are produced. Default is 1.
– fraction: if on, produce additional structure files used for Delwaq fraction mode. Default is 0.
Like wflow_sbm, the whole coupling process can be run either from the command line or batch file(s). First
wflow_sbm is run. The minimum command line requires:
• The link to the wflow_sbm script.
• -C option stating the name of the wflow case directory.
• -R option stating the name of the directory of wflow_sbm outputs.
Then wflow_emwaq is run. The minimum command line requires:
• The link to the wflow_emwaq script.
• -C option stating the name of the wflow case directory (idem sbm).
• -R option stating the name of the directory containing wflow_sbm outputs (idem sbm).
• -D option stating the name of the directory of wflow_emwaq outputs.
• -c option stating the name of wflow_emwaq ini file.
Additional run options corresponding to the one set up in the model section of the ini file are:
• -i for writing binary and ASCII files.
• -u for writing structure files.
• -y for writing dynamic files.
• -f for writing additional fraction files.
• -e for an emission or water quality coupling.
• -F for writing additional files for FEWS or deltashell GUIs.
• -a for aggregation of results from cells to subcatchments.
Finally, all the files produced by wflow_emwaq need to be included in the main input file from D-Emission/WAQ and
the emission or water quality model can be run. An example of the command lines needed to run both wflow python
scripts is:
activate wflow
python wflow\wflow_sbm.py -C Rhine -R SBM
python wflow\wflow_emwaq.py -C Rhine -R SBM -D Rhine\WAQ -c wflow_emwaq.ini
Note: if only the schematisation is of interest and not the dynamic fluxes, the coupling can also be run without first
running wflow_sbm. In that case, the writing structure files option should be turned on and the writing dynamic files
option turned off.
The wflow_emwaq module produces several folders in the defined run directory. These are the debug folder, which
contains additional maps or files produced during the coupling process, the includes_deltashell folder, which includes
all the structure files listed in Table 4, and the includes_flow folder, which includes all the data files listed in Table
5. The type of files produced depends on whether the coupling is made for an emission model or a water quality model,
with or without fraction calculation.
Table 4: List of structure files produced by wflow_emwaq
To create the pointer file needed by the emission/water quality models, the wflow_emwaq script first gives a unique
ID number to each of the active cells of the wflow_sbm grid (cells that are in the modelled area). Then the script reads
the compartments.csv table. The unique IDs previously created are then assigned to the first compartment listed in
the table. If there are other compartments listed in the table, the unique IDs are copied and shifted by the total
number of cells. Compartments are therefore read one by one and the different segment IDs are created step by step
accordingly. For example, for a wflow model with 9 active grid cells, if the compartments.csv table contains three
lines starting from "Surface Water" to "Unsaturated Store" and finishing with "Saturated Store", the corresponding
IDs of the D-Emission/WAQ model would be:
Then the fluxes and links to the fluxes are handled. Instead of reading the fluxes list defined in the corresponding
csv table one by one like for the compartments, they are instead read by type starting with the lateral fluxes (equal
name in the From and To column of the fluxes.csv table). The direction of the flow within the segment of the same
compartment is defined with the local drain direction (LDD) map from wflow_sbm. With this map, the downstream
cell ID is identified and the pointer file construction starts. Back to our example, the first lines of the pointer
would then be:
Finally, the vertical fluxes between compartments and boundaries are treated (names in either the From or To column
of fluxes.csv corresponds to a name in the ID column of compartments.csv). By default, the coupling gives one ID per
boundary type. If the bd_id option is set, IDs for boundaries are defined in the same way as for the compartments (one
ID per boundary per active cell). Numbering starts with an offset corresponding to the number of outflows. Vertical
fluxes between compartment and boundary are also read in the same order of appearance in the fluxes.csv file. For
example, if we have a flux going from the boundary atmosphere to the surface water (precipitation), the corresponding
IDs for the atmosphere boundary and pointer lines, by default, would be (there was already one outflow boundary):
When wflow_sbm outputs are handled for all cells by the coupling, IDs are given to the cells row by row, starting
from the top left corner to the bottom right corner of the grid. If wflow results are aggregated by subcatchments, the IDs
depend on the wflow subcatchment map that is used. Subcatchment IDs are then sorted in ascending order rather
than by location on the map. For lateral flows, the downstream subcatchment is determined using the downstream cell of
the outlet (corresponding to the wflow gauge map). If we take an example with four subcatchments, the pointer for the
lateral flux (after replacing the outflow by a boundary) will then be:
If there are several compartments, they are then handled in the same way as for cells. The fluxes are also treated in
the same order: first the lateral fluxes, then fluxes between two compartments and finally fluxes with boundaries.
Once the pointer is created, the other structure files are created using a specific function from the coupling script.
These functions usually use inputs coming from the pointer file creation.
For a coupling with Delwaq for each wflow cell, the flow.dat file is constructed in the very same way as the pointer.
Flows are read from lateral, to between two compartments, to between boundaries and compartments, and within each
type in the order of their definition in the fluxes.csv table. For each timestep, the corresponding wflow netcdf or map
output will be read, and flux values saved row by row from the top left corner to the bottom right corner, corresponding
to the numbering of the wflow cell IDs.
For a coupling with D-Emission for each wflow cell, as some fluxes are present in the pointer but not in the hydrology
file, another definition file hydrology.inc is created to give the order of the fluxes saved in the hydrology.bin file and the
corresponding segments affected in the pointer file. As the order of the flux definitions is not important then, fluxes
are saved one by one in the order of their definition in the fluxes.csv table (and not from lateral to vertical fluxes). The
"TotalFlow" is then added at the end. As for a Delwaq coupling, for each timestep the corresponding wflow netcdf
or map output will be read and flux values saved row by row from the top left corner to the bottom right corner,
corresponding to the numbering of the wflow cell IDs.
If the aggregation option is on, the entire aggregated CSV tables of the different wflow variables are read in the order
of the flow definition in the pointer file for Delwaq (lateral then vertical) and in the order of their definition in the
fluxes.csv table for D-Emission. As wflow already sorts the saved aggregated variables by ascending number of the
subcatchment IDs, no restructuring of the variable CSV tables is needed.
Introduction
In order to simplify conversion of an existing model to a reusable, plug-and-play model component, CSDMS has
developed a simple interface called the Basic Model Interface or BMI that model developers are asked to implement.
Recall that in this context an interface is a named set of functions with prescribed function names, argument types and
return types. The BMI functions make the model self-describing and fully controllable by a modeling framework.
See also: http://csdms.colorado.edu/wiki/BMI_Description
This is the first implementation of the BMI for the wflow pcraster/python models
Configuration
Mapping of long_var_name to model variables is not yet implemented. The long_var_name should be the model
variable name for now.
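A minimal sketch of driving a wflow model through this BMI from Python is shown below. The import path and class name (wflowbmi_csdms), as well as the variable name passed to get_value, should be treated as assumptions here; the method names follow the standard BMI:

from wflow.wflow_bmi import wflowbmi_csdms

model = wflowbmi_csdms()
model.initialize("wflow_sbm/wflow_sbm.ini")

# step the model to the end of the run
while model.get_current_time() < model.get_end_time():
    model.update()

q = model.get_value("SurfaceRunoff")   # variable names are the model names for now
model.finalize()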
Introduction
In order to simplify conversion of an existing model to a reusable, plug-and-play model component, CSDMS has
developed a simple interface called the Basic Model Interface or BMI that model developers are asked to implement.
Recall that in this context an interface is a named set of functions with prescribed function names, argument types and
return types. The BMI functions make the model self-describing and fully controllable by a modeling framework.
See also: http://csdms.colorado.edu/wiki/BMI_Description
The wflow_bmi_combined module implements a class that connects 2 or more python bmi modules and exports those
to the outside as a single bmi model.
• A @ character is used to separate the module from the variable. For example, the variable Flow in module
wflow_hbv becomes wflow_hbv@Flow in the combined model (it was Flow in the single interface)
• The individual models can run in a separate case dir or can be combined into one directory (and share maps that
are identical)
The bmi2runner.py script can be used to run a set of combined models; it is documented separately.
1.9.3 bmi2runner
Introduction
bmi2runner.py is a simple script that runs two or more wflow modules connected via the BMI interface (the combined
version). A config file is used to control which models to start as well as the exchange of data between the models.
The config file contains a list of models configured with the name of the wflow module to run and the ini file that is
used by the model. Furthermore, in the exchanges section the data flows from model to model are configured.
[models]
# module name = path to config of module relative to the dir of this ini file
wflow_sbm=wflow_sbm/wflow_sbm_comb.ini
wflow_routing=wflow_routing/wflow_routing_comb.ini
[exchanges]
# From_model/var -> To_model/var
wflow_sbm@InwaterMM=wflow_routing@IW
To set up a combined model you should first configure and set up the individual models. They can be set up in separate
case directories or they can be merged in one case directory. Each model should have its own config/ini file. The
following principles apply when using the bmi2runner script:
• the models are executed in the order they are listed in the models section
• the variables are get/set in the order they appear in the exchanges section
• the script runs explicitly, no iteration is performed
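Conceptually, the runner loop therefore looks roughly like the self-contained sketch below; toy BMI objects stand in for the wflow models, so this is an illustration of the principles above rather than the bmi2runner source:

class ToyBMI:
    def __init__(self):
        self.t, self.t_end = 0, 3
        self.vars = {"InwaterMM": 1.0, "IW": 0.0, "Q": 0.0}
    def update(self):
        self.t += 1
        self.vars["Q"] = self.vars["IW"] * 0.5    # pretend routing
    def get_value(self, name): return self.vars[name]
    def set_value(self, name, value): self.vars[name] = value

models = {"wflow_sbm": ToyBMI(), "wflow_routing": ToyBMI()}       # order matters
exchanges = [("wflow_sbm", "InwaterMM", "wflow_routing", "IW")]   # from -> to

first = next(iter(models.values()))
while first.t < first.t_end:
    for bmi in models.values():               # executed in the listed order
        bmi.update()
    for src, svar, dst, dvar in exchanges:    # get/set in the listed order
        models[dst].set_value(dvar, models[src].get_value(svar))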
Example
In the examples directory the file bmirunner.ini is present. You can use this to run a combined
wflow_sbm/wflow_routing model. Start this up using the following command (in the examples dir):
bmi2runner.py -c bmirunner.ini
A second example runs wflow_sbm, then wflow_routing, followed by the wflow_floodmap module:
bmi2runner.py -c bmirunner-floodmap.ini
[models]
wflow_sbm=wflow_rhine_sbm/wflow_sbm.ini
wflow_routing=wflow_routing/wflow_routing_BMI.ini
wflow_floodmap=wflow_routing/wflow_floodmap_BMI.ini
[exchanges]
# From_model.var -> To_model.var
wflow_sbm@InwaterMM=wflow_routing@IW
wflow_routing@WaterLevel=wflow_floodmap@H
In this case the floodmap module uses the same directory as the routing module (but a different config file).
The wflow modelbuilder is a new tool with which you can set up a wflow model in a few simple steps. The default
setup of the wflow modelbuilder is fully based on global data sets. The modelbuilder uses a set of tools called
hydro-engine (https://github.com/openearth/hydro-engine), which are built on top of Google Earth Engine
(https://earthengine.google.com/).
Installation
Once you have downloaded and installed wflow, the modelbuilder is available in this location:
<your_wflow_folder>/Scripts/wtools_py/modelbuilder.py
You can run the modelbuilder script from the command line or in a batch file. The script uses the settings.json file for
the location of the model. To run the modelbuilder script with python, use the following command:
The settings.json file must be present. The modelbuilder comes with a default settings.json file. Its contents look like
this:
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Point",
"coordinates": [
7.466239929199218,
50.31565429419649
]
}
}
]
}
--case-path          Path where both the template and created case reside (default is the current directory)
--fews-config-path   Path to the Delft-FEWS config directory (to save default FEWS states) (default=Config)
…                    (catchments-upstream, catchments-intersection, region) (default=catchments-upstream)
Example:
Run this command from the command line or in a batch file, and you will have your model.
The generated model structure looks like this:
data\
inmaps\
instate\
intbl\
mask\
run_default\
To run the wflow model, you need the staticmaps and intbl directories and the wflow_sbm.ini file. The inmaps and
the instate directories are also needed to run the model, but these are not filled yet. By default, results of your model run
are stored in the run_default directory, and this directory including all its subfolders is required if you run the model
within FEWS.
In the mask folder you will find the mask that is used to clip the model, and the grid definition in FEWS format (in
grid.xml), which you can copy-paste into the Grids.xml file in your FEWS configuration. In the data folder you will
find the data that was used to generate the model, after clipping it from the global data: geojson files for the catchments
and rivers, and raster files for the DEM and the parameter maps.
The wflow_sbm.ini file contains the configuration settings needed to run the wflow_sbm model. It is an example file: change the settings according to your specific model setup (see the wflow_sbm|hbv.ini documentation).
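As an illustration only (the section and option names below are stated from memory and should be verified against that documentation), the settings most often adjusted after generating a model are the run period and the model time step:

[run]
starttime = 2010-01-01 00:00:00
endtime = 2010-12-31 00:00:00
reinit = 1
[model]
modeltype = wflow_sbm
timestepsecs = 86400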
Model data
Where does the data come from? The default setup of the wflow modelbuilder is fully based on global data sets. Below you will find the specifications of the global data sets used.
Catchment delineation
The clipping of the global maps is done based on the model area. The model area is based on the HydroBASINS subcatchments, level 9 (http://hydrosheds.org/page/hydrobasins). The modelbuilder determines in which HydroBASINS subcatchment the coordinates specified in the settings.json file are located, and queries all upstream catchments as one or more polygons. Together, these subcatchments define the area of your model. The data sets described below are all clipped based on this area.
Rivers
For the river network, the HydroSHEDS drainage network is queried as polylines (http://hydrosheds.org/).
Optionally, a local or improved river vector file (shapefile, geojson, etc.) can be provided to the modelbuilder with the
option --river-path. If a local river vector file is specified, this will be used instead of the default global river
file.
DEM
For the elevation data, the digital elevation model (DEM) used is SRTM v4 at 30 m resolution (https://www2.jpl.nasa.gov/srtm/).
Optionally, a local or improved Digital Elevation Model (DEM) can be provided to the modelbuilder with the option
--dem-path. If a local DEM is specified, this will be used instead of the default global DEM.
Land use
For land use, the 0.5 km MODIS-based Global Land Cover Climatology map by the USGS Land Cover Institute (LCI) is used (https://landcover.usgs.gov/global_climatology.php). This land cover dataset consists of 17 different land cover classes. The legend for this land cover map is also provided in the template case (and copied to your wflow model) in data/parameters/lulegend.txt.
LAI
LAI (Leaf Area Index) maps for the wflow-sbm model are stored in the staticmaps/clim directory. These are twelve
maps with monthly average LAI, based on combined AVHRR and MODIS data, derived from Liu et al. 2012
[Liu2012], calculated as averages over 1981-2011.
Soil type
A soil map indicating major soil texture types is also downloaded with the modelbuilder (wflow_soil.map), which is
derived from the Harmonized World Soil Database (HWSD) (FAO et al. 2009 [FAO2009]). The legend for this soil
derived from the Harmonized World Soil Database (HWSD) (FAO et al. 2009 [FAO2009]). The legend for this soil dataset is also provided in the template case in data/parameters/wflow_soil.csv. In the current setup with global data, this soil map is not used, since all soil-based parameters are specified as rasters. It can, however, be useful if you want to differentiate parameters in the intbl directory based on soil type, or if you want to add more parameters as .tbl files.
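As a hedged sketch, such a lookup table could differentiate a parameter by the soil classes in wflow_soil.map like this (columns: landuse, subcatchment, soil class from wflow_soil.map, value; the file name M.tbl, the class numbers and the values are purely illustrative, so verify the column order against the wflow lookup-table documentation):

<,> <,> 1 300
<,> <,> 2 600
<,> <,> <,> 450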
Model parameters
At the moment it is only possible to set up a model with the modelbuilder in the WGS84 coordinate system
(EPSG:4326).
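If local data (see the --dem-path and --river-path options above) is in a projected coordinate system, reproject it to EPSG:4326 before passing it to the modelbuilder, for example with the GDAL/OGR command-line tools (file names are illustrative):

gdalwarp -t_srs EPSG:4326 local_dem_utm.tif local_dem_wgs84.tif
ogr2ogr -t_srs EPSG:4326 rivers_wgs84.geojson rivers_utm.shp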
References
1.11 Release notes
1.11.1 2019.1
• Wflow_sbm, redesign of lateral subsurface flow because of a bug that resulted in an overestimation of lateral
subsurface flux:
– The kinematic wave approach is used to route subsurface flow laterally.
– Kinematic wave solutions for surface (overland and river) and subsurface flow were added to wflow_funcs and are used by wflow_sbm. Numba (an open-source JIT compiler) is used to accelerate these functions. This replaces the PCRaster kinematic wave solution.
– A separate kinematic wave reservoir for river flow (self.RiverRunoff) and for overland flow (self.LandRunoff)
– Lateral subsurface flow and overland flow feed into the drainage network (channel) based on the ratio of slopes (slope of the upstream cell / (slope of the upstream cell + slope of the cell))
– Option to provide land slope and river slope (staticmaps)
– Removed sub-grid runoff generation
– Removed re-infiltration of surface water
• Wflow_hbv now uses the kinematic wave solution for surface flow available in wflow_funcs (replacing the PCRaster kinematic wave solution)
• Option to enable iteration (smaller time steps) for kinematic wave solution for surface runoff (wflow_hbv and
wflow_sbm)
• Added the hydrological model wflow_w3 to the wflow framework, based on the Australian Water Resources Assessment Landscape model (AWRA-L); an improved version compared to wflow_W3RA.
• Added the hydrological model wflow_stream to the wflow framework (STREAM: Spatial Tools for River Basins and Environment and Analysis of Management Options).
• Added experimental version of wflow_sediment to simulate soil erosion and delivery to the river system.
• Added the wflow_emwaq module: provides a set of functions to create and link a Delft3D-WAQ or D-Emission
model to a wflow model.
• Update of the lake modelling for wflow_hbv and wflow_sbm (also including the Modified Puls approach) in wflow_funcs
• Update of the glacier model (used by wflow_hbv and wflow_sbm)
• The river Manning N parameter can be linked to stream order for wflow_hbv
• setuptools_scm is now used for versioning of the wflow package
1.11.3 2017.01
Note: Several non-backwards-compatible changes are part of this release. Use 2016.03 for older models.
1.11.5 2016.03
1.11.6 2016.02
1.11.8 2015.01
1.12 Linking wflow to OpenDA
1.12.1 Prerequisites
1.12.2 Configuration
<bmiModelFactoryConfig ... bmiModelFactoryConfig.xsd">
    <pythonModel>
        <pythonPath>../../../wflow_bin</pythonPath>
        <moduleName>wflow.wflow_bmi</moduleName>
        <className>wflowbmi_csdms</className>
        <!-- You must give an absolute path to the py.exe of the binary distribution -->
    </pythonModel>
    <modelTemplateDirectory>../../combined</modelTemplateDirectory>
    <modelConfigFile>wflow_wr3a.ini</modelConfigFile>
    <bmiModelForcingsConfig>
        <dataObject>
            <className>org.openda.exchange.dataobjects.NetcdfDataObject</className>
            <file>Precipitationmapstack.nc</file>
            <arg>true</arg>
            <arg>false</arg>
        </dataObject>
    </bmiModelForcingsConfig>
</bmiModelFactoryConfig>
CHAPTER
TWO
REFERENCES
• Köhler, L., Mulligan, M., Schellekens, J., Schmid, S. and Tobón, C.: Final Technical Report DFID-FRP Project no. R7991, Hydrological impacts of converting tropical montane cloud forest to pasture, with initial reference to northern Costa Rica. 2006.
CHAPTER
THREE
Arnal, L., 2014. An intercomparison of flood forecasting models for the Meuse River basin (MSc Thesis). Vrije
Universiteit, Amsterdam.
Azadeh Karami Fard, 2015. Modeling runoff of an Ethiopian catchment with WFLOW (MSc thesis). Vrije Universiteit, Amsterdam.
de Boer-Euser, T., Bouaziz, L., De Niel, J., Brauer, C., Dewals, B., Drogue, G., Fenicia, F., Grelier, B., Nossent,
J., Pereira, F., Savenije, H., Thirel, G., Willems, P., 2017. Looking beyond general metrics for model comparison –
lessons from an international model intercomparison study. Hydrol. Earth Syst. Sci. 21, 423–440. doi:10.5194/hess-21-423-2017
Emerton, R.E., Stephens, E.M., Pappenberger, F., Pagano, T.C., Weerts, A.H., Wood, A.W., Salamon, P., Brown, J.D.,
Hjerdt, N., Donnelly, C., Baugh, C.A., Cloke, H.L., 2016. Continental and global scale flood forecasting systems.
WIREs Water 3, 391–418. doi:10.1002/wat2.1137
Hally, A., Caumont, O., Garrote, L., Richard, E., Weerts, A., Delogu, F., Fiori, E., Rebora, N., Parodi, A., Mihalović,
A., Ivković, M., Dekić, L., van Verseveld, W., Nuissier, O., Ducrocq, V., D’Agostino, D., Galizia, A., Danovaro, E.,
Clematis, A., 2015. Hydrometeorological multi-model ensemble simulations of the 4 November 2011 flash flood
event in Genoa, Italy, in the framework of the DRIHM project. Nat. Hazards Earth Syst. Sci. 15, 537–555.
doi:10.5194/nhess-15-537-2015
Hassaballah, K., Mohamed, Y., Uhlenbrook, S., Biro, K., 2017. Analysis of streamflow response to land use land
cover changes using satellite data and hydrological modelling: case study of Dinder and Rahad tributaries of the Blue
Nile. Hydrol. Earth Syst. Sci. Discuss. 2017, 1–22. doi:10.5194/hess-2017-128
Jeuken, A., Bouaziz, L., Corzo, G., Alfonso, L., 2016. Analyzing Needs for Climate Change Adaptation in the
Magdalena River Basin in Colombia, in: Filho, W.L., Musa, H., Cavan, G., O’Hare, P., Seixas, J. (Eds.), Climate
Change Adaptation, Resilience and Hazards, Climate Change Management. Springer International Publishing, pp.
329–344.
López López, P., Wanders, N., Schellekens, J., Renzullo, L.J., Sutanudjaja, E.H., Bierkens, M.F.P., 2016. Improved large-scale hydrological modelling through the assimilation of streamflow and downscaled satellite soil moisture observations. Hydrol. Earth Syst. Sci. 20, 3059–3076. doi:10.5194/hess-20-3059-2016
Maat, W.H., 2015. Simulating discharges and forecasting floods using a conceptual rainfall-runoff model for the
Bolivian Mamoré basin (MSc thesis). University of Twente, Enschede.
Hydrologic modeling of principal sub-basins of the Magdalena-Cauca large basin using WFLOW model [WWW Document], n.d. ResearchGate. URL https://www.researchgate.net/publication/280293861_HYDROLOGIC_MODELING_OF_PRINCIPAL_SUB-BASINS_OF_THE_MAGDALENA-CAUCA_LARGE_BASIN_USING_WFLOW_MODEL (accessed 4.4.17).
Tangdamrongsub, N., Steele-Dunne, S.C., Gunter, B.C., Ditmar, P.G., Weerts, A.H., 2015. Data assimilation of
GRACE terrestrial water storage estimates into a regional hydrological model of the Rhine River basin. Hydrol. Earth
Syst. Sci. 19, 2079–2100. doi:10.5194/hess-19-2079-2015
Tretjakova, D., 2015. Investigating the effect of using fully-distributed model and data assimilation on the performance
of hydrological forecasting in the Karasu catchment, Turkey (MSc thesis). Wageningen University.
Wang, X., Zhang, J., Babovic, V., 2016. Improving real-time forecasting of water quality indicators with
combination of process-based models and data assimilation technique. Ecological Indicators 66, 428–439.
doi:10.1016/j.ecolind.2016.02.016
CHAPTER
FOUR
TODO
BIBLIOGRAPHY
[Dai2013] Dai, Y., W. Shangguan, Q. Duan, B. Liu, S. Fu, G. Niu, 2013. Development of a China Dataset of Soil Hydraulic Parameters Using Pedotransfer Functions for Land Surface Modeling. Journal of Hydrometeorology, 14:869-887.
[VanDijk2001] Dijk, A.I.J.M. van and L.A. Bruijnzeel (2001), Modelling rainfall interception by vegetation of variable density using an adapted analytical model. Part 1. Model description. Journal of Hydrology 247, 230-238.
[FAO2009] FAO/IIASA/ISRIC/ISS-CAS/JRC, 2009. Harmonized World Soil Database (version 1.1). FAO, Rome,
Italy and IIASA, Laxenburg, Austria.
[Liu1998] Liu, S. (1998), Estimation of rainfall storage capacity in the canopies of cypress wetlands and slash pine
uplands in North-Central Florida. Journal of Hydrology 207, 32-41.
[Liu2012] Liu, Y., R. Liu, and J. M. Chen (2012), Retrospective retrieval of long-term consistent global leaf
area index (1981–2011) from combined AVHRR and MODIS data. J. Geophys. Res., 117, G04003,
doi:10.1029/2012JG002084.
[Shangguan2014] Shangguan, W., Dai, Y., Duan, Q., Liu, B. and Yuan, H., 2014. A Global Soil Data Set for Earth
System Modeling. Journal of Advances in Modeling Earth Systems, 6: 249-263.
INDEX
C
checkerboard() (in module wflow_lib), 129
classify() (in module wflow_lib), 129
configget() (in module wflow_fit), 127
configget() (in module wflow_lib), 129
configget() (in module wflow_prepare_step1), 17
configget() (in module wflow_prepare_step2), 18
configsection() (in module wflow_lib), 130
configset() (in module wflow_lib), 130
cutMapById() (in module wflow_lib), 130
D
derive_HAND() (in module wflow_lib), 130
detdrainlength() (in module wflow_lib), 131
detdrainwidth() (in module wflow_lib), 131
E
estimate_iterations_kin_wave() (in module wflow_funcs), 150
F
find_outlet() (in module wflow_lib), 131
G
getcols() (in module wflow_lib), 132
getEndTimefromRuninfo() (in module wflow_adapt), 124
getgridparams() (in module wflow_lib), 132
I
idtoid() (in module wflow_lib), 132
K
kin_wave() (in module wflow_funcs), 150
kinematic_wave() (in module wflow_funcs), 150
kinematic_wave_ssf() (in module wflow_funcs), 150
L
lddcreate_save() (in module wflow_lib), 132
log2xml() (in module wflow_adapt), 125
lookupResFunc() (in module wflow_funcs), 150
lookupResRegMatr() (in module wflow_funcs), 150
M
main() (in module wflow_adapt), 125
main() (in module wflow_prepare_step1), 18
main() (in module wflow_prepare_step2), 18
mapstackxml() (in module wflow_adapt), 125
module: wflow_adapt, 124; wflow_fit, 127; wflow_flood, 77; wflow_funcs, 149; wflow_lib, 128; wflow_prepare_step1, 17; wflow_prepare_step2, 18
multVarWithPar() (wflow_fit.wfmodel_fit_API method), 127
N
naturalLake() (in module wflow_funcs), 150
O
OpenConf() (in module wflow_prepare_step1), 17
OpenConf() (in module wflow_prepare_step2), 18
P
pixml_state_updateTime() (in module wflow_adapt), 125
pixml_totss() (in module wflow_adapt), 125
pixml_totss_dates() (in module wflow_adapt), 125
points_to_map() (in module wflow_lib), 133
propagate_downstream() (in module wflow_funcs), 151
pt_flow_in_river() (in module wflow_lib), 133
R
rainfall_interception_gash() (in module wflow_funcs), 151
rainfall_interception_hbv() (in module wflow_funcs), 151
rainfall_interception_modrut() (in module wflow_funcs), 151
readMap() (in module wflow_lib), 133
resamplemaps() (in module wflow_prepare_step2), 18
riverlength() (in module wflow_lib), 133
run() (wflow_fit.wfmodel_fit_API method), 128
S
savemaps() (wflow_fit.wfmodel_fit_API method), 128
sCurve() (in module wflow_funcs), 151
sCurve() (in module wflow_lib), 133
sCurveSlope() (in module wflow_lib), 134
set_dd() (in module wflow_funcs), 152
setlogger() (in module wflow_adapt), 125
shutdown() (wflow_fit.wfmodel_fit_API method), 128
simplereservoir() (in module wflow_funcs), 152
snaptomap() (in module wflow_lib), 134
SnowPackHBV() (in module wflow_funcs), 149
subcatch() (in module wflow_lib), 134
subcatch_order_a() (in module wflow_lib), 134
subcatch_order_b() (in module wflow_lib), 135
subcatch_stream() (in module wflow_lib), 135
sum_list_cover() (in module wflow_lib), 135
T
tss_topixml() (in module wflow_adapt), 125
U
upscale_riverlength() (in module wflow_lib), 135
usage() (in module wflow_prepare_step1), 18
usage() (in module wflow_prepare_step2), 18
W
wflow_adapt (module), 124
wflow_fit (module), 127
wflow_flood (module), 77
wflow_funcs (module), 149
wflow_lib (module), 128
wflow_prepare_step1 (module), 17
wflow_prepare_step2 (module), 18
wfmodel_fit_API (class in wflow_fit), 127
writeMap() (in module wflow_lib), 136
Z
zipFiles() (in module wflow_lib), 136