Croco Tutorials, Release 1.3

Contents:

1 Disk space
3 Environment variables
4 Download
  4.1 Downloading CROCO
  4.2 Getting other codes (coupling)
7 Test Cases
  7.1 BASIN
  7.2 Set up your own test case
10 Compiling
  10.1 cppdefs.h
  10.2 param.h
  10.3 jobcomp
  10.4 Compilation options
  10.5 Tips in case of errors during compilation
14 Nesting Tutorial
15 Adding Rivers
  15.1 Constant flow and concentration
  15.2 Variable flow read in a netCDF file and constant concentration
  15.3 Variable flow and variable concentration from a netCDF file
  15.4 Using a nest
16 Adding tides
  16.1 Pre-processing (Matlab)
  16.2 Compiling
  16.3 Running
17 Visualization (Matlab)
18 Visualization (Python)
  18.1 Setup your Miniconda environment
  18.2 Croco_visu directory
  18.3 Launch visualization
  18.4 How to customize for your own history files
  18.5 How to add new variables
19 NBQ Tutorial
  19.1 Some important points about Large-Eddy Simulations (LES)
  19.2 KH_INST Test Case
  19.3 Set up your own NBQ configuration
  19.4 NBQ options
  19.5 Appendix: some words on the CROCO-NBQ kernel
20 Coupling tutorial
  20.1 Summary of steps for coupling
  20.2 Compiling in coupled mode
  20.3 Simple CROCO-TOY coupled example
  20.4 Advanced coupling tutorial
23 XIOS
24 Tips
  24.1 Tips in case of errors during compilation
  24.2 Tips for errors at runtime
  24.3 Analytical forcing
Bibliography
CHAPTER
ONE
DISK SPACE
CROCO and CROCO_TOOLS source codes require less than 500 MB of disk space. Climatological datasets, provided for regional configurations, require about 18 GB of disk space.
TWO
CROCO uses Fortran routines as well as cpp keys. The I/O are handled in netCDF. It therefore requires:
• a C compiler
• a Fortran compiler
• a Netcdf library
• MPI libraries and compilers if running in parallel
CROCO_TOOLS uses Matlab and Python scripts.
THREE
ENVIRONMENT VARIABLES
A few environment variables for compilers and libraries should be declared to avoid issues when compiling and running CROCO. If you are using Intel compilers, for instance, you should declare the following (in your .bashrc file):
export CC=icc
export FC=ifort
export F90=ifort
export F77=ifort
For Netcdf, you should also declare your netcdf path, and add it to the PATH and LD_LIBRARY_PATH environ-
ment variables. Here is an example:
export NETCDF=$HOME/softs/netcdf
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib
Note: Common errors associated with Netcdf are usually solved by checking that Netcdf is correctly declared in
your LD_LIBRARY_PATH
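A quick sanity check (a sketch; nc-config ships with the netCDF library):

which nc-config && nc-config --version                  # is netCDF found in your PATH?
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep -i netcdf    # is the library path declared?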
FOUR
DOWNLOAD
CROCO source code and croco_tools stable releases are available in the Download section of https://www.croco-ocean.org/croco-project/
Both are available for download as compressed tar files.
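To install them, extract the archives; a minimal sketch, assuming the v1.0 archive names matching the fix files below:

tar -zxvf croco-v1.0.tar.gz
tar -zxvf croco_tools-v1.0.tar.gz

Bug-fix files are distributed alongside some releases; apply them by copying the fixed files over the original sources: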
cp analytical.F_fix17juin2019 croco-v1.0/OCEAN/analytical.F
cp get_psource_ts.F_fix04september2018 croco-v1.0/OCEAN/get_psource_ts.F
cp read_inp.F_fix30july2018 croco-v1.0/OCEAN/read_inp.F
cp check_domain_runoff.m_fix17juin2019 croco_tools-v1.0/Rivers/check_domain_runoff.m
cp make_runoff.m_fix17juin2019 croco_tools-v1.0/Rivers/make_runoff.m
For the Matlab toolbox, additional packages such as m_map, air-sea, etc. (gathered under the name UTILITIES) are required.
Note: UTILITIES are now provided directly within croco_tools, and do not need to be downloaded separately.
Otherwise (if the provided files are not working on your platform), they are available for download here: https://www.croco-ocean.org/download/utilities/, or the user can download them from the original repositories:
• the NetCDF Matlab Mex file is needed to read and write NetCDF files; it can be found at: http://mexcdf.sourceforge.net/.
• The LoadDAP Matlab Mex file is used to download data from OpenDAP servers for inter-annual and forecast simulations. It can be found at: http://www.opendap.org/download/ml-structs.html. The Matlab LoadDAP Mex file provides a way to read any OpenDAP-accessible data into Matlab. Note that the LibDAP library must be installed on your system before installing LoadDAP. Details can be found at: http://www.opendap.org.
Finally, forcing datasets are required (initial, surface, and boundary conditions). Climatological global datasets are provided here: https://www.croco-ocean.org/download/datasets/
croco_tools also provides pre-processing scripts for the use of interannual datasets such as:
• CFSR, ERA-interim, . . . for atmospheric forcing
• SODA, ECCO2, MERCATOR, . . . for the ocean boundaries and initialization
• OASIS coupler
To use CROCO in coupled mode (coupling with atmosphere and/or waves), OASIS3-MCT version 3 is required.
Note: Older versions of OASIS do not include all the necessary functions, such as grid generation in parallel mode. If you want to use an older version, you need to create your grids.nc, masks.nc, and areas.nc files first, and comment out the call to cpl_prism_grids in cpl_prism_define.F.
• WW3
WaveWatch3 is now hosted on github on a public repository: https://github.com/NOAA-EMC/WW3
You can thus clone the repository:
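A minimal sketch:

git clone https://github.com/NOAA-EMC/WW3.git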
• WRF
Currently the distributed version of WRF does not include coupling with waves, nor some other functionalities we have recently implemented. We therefore suggest using the fork including modifications for coupling with WW3 and CROCO through the OASIS coupler, but note that this is a development version: https://github.com/wrf-croco/WRF/tree/WRF-CROCO
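A sketch for retrieving it (the WRF-CROCO branch name is taken from the URL above):

git clone https://github.com/wrf-croco/WRF.git
cd WRF
git checkout WRF-CROCO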
You need to use the same WPS version as the WRF version you use. Currently the WRF version on the WRF-CROCO fork is WRF4.2.1. You should therefore use the WPS 4.2 version. To do so, with git you can move to the appropriate tag:
cd WPS
git checkout tags/v4.2
CHAPTER
FIVE
5.1 Architecture
5.2 Contents
CROCO and its tools are distributed in separate directories croco and croco_tools.
5.2.1 croco
5.2.2 croco_tools
CROCO preprocessing tools have been primarily developed under Matlab software by IRD researchers (formerly Roms_tools).
Note: These tools have been made to easily build regional configurations using climatological data. To use interannual data, some facilities are available (NCEP, CFSR, QuickScat data for atmospheric forcing, SODA and ECCO for lateral boundaries). However, to use other data, you will need to adapt the scripts. All utilities/toolboxes required by the Matlab crocotools programs are provided within the UTILITIES directory, or can be downloaded here: http://www.croco-ocean.org/download/utilities/.
Scripts:

Aforc_CFSR: Scripts for the recovery of surface forcing data (based on the CFSR reanalysis) for interannual simulations
Aforc_ECMWF: Scripts for the recovery of surface forcing data (based on ECMWF-ERAinterim simulations) for interannual simulations
Aforc_ERA5: Scripts for the recovery of surface forcing data (based on ECMWF-ERA5 simulations) for interannual simulations
Aforc_NCEP: Scripts for the recovery of surface forcing data (based on the NCEP2 reanalysis) for interannual simulations

UTILITIES

DATASETS:

CARS2009: CSIRO Atlas of Regional Seas database. Annual, seasonal and monthly climatology for temperature, salinity, nitrate, phosphate and oxygen
SST_pathfinder: SST global monthly climatology at a finer resolution (9.28 km) than COADS05, computed from AVHRR-Pathfinder observations from 1985 to 1997 (Casey and Cornillon, 1999)
WOA2009: World Ocean Atlas 2009 global dataset. References list: http://www.nodc.noaa.gov/OC5/WOA09/pubwoa09.html
WOAPISCES: A global dataset for biogeochemical PISCES data (annual and seasonal climatology). References: Fe and DOC from Aumont and Bopp, 2006; Si, O2, NO3, PO4 from WOA2005; DIC and alkalinity from Goyet et al.
SIX
1. Compilation: CROCO needs to be compiled for each configuration (grid, MPI decomposition, parameterizations. . . ). The files that need to be edited are (available in the croco/OCEAN directory):

cppdefs.h: CPP keys* allowing to select the configuration, numerical schemes, parameterizations, forcing and boundary conditions.
* CROCO extensively uses the C preprocessor (cpp) during compilation to replace code statements, insert files into the code, and select relevant parts of the code depending on its directives.

param.h: Grid settings; the model grid size is set by:
LLm0 points in the X direction
MMm0 points in the Y direction
N vertical levels
For realistic regional cases, LLm0 and MMm0 are given by running make_grid.m, and N is defined in crocotools_param.m.
param.h also contains: parallelisation settings; tides, wetting-drying, point sources, floats, and stations specifications.
2. Namelist: the CROCO namelist input file croco.in contains several configuration settings such as: the time stepping, the vertical coordinate settings, the I/O settings and paths, some model parameters, . . . It has to be edited before running. It is available in the croco/OCEAN directory for regional configurations, and in the croco/TEST_CASES directory for test cases.
3. Input files: CROCO needs the following input files to run:
• CROCO grid file: croco_grd.nc
• CROCO surface forcing file: croco_frc.nc (or croco_blk.nc)
• CROCO lateral boundary conditions: croco_bry.nc (or croco_clim.nc)
• CROCO initial conditions: croco_ini.nc
They can be created using the Preprocessing croco_tools; see the dedicated tutorial. These files are not mandatory for test cases, in which the useful settings are defined analytically within the CROCO code.
4. Run: CROCO can be run in serial or parallel mode. See the run tutorial.
5. Outputs: CROCO usual outputs are:
• CROCO restart file: croco_rst.nc
• CROCO instantaneous output file: croco_his.nc
• CROCO averaged output file: croco_avg.nc
• CROCO log file: croco.log, if you have defined the LOGFILE key in cppdefs.h (# define LOGFILE)
Other output files can be generated depending on the settings provided in croco.in.
SEVEN
TEST CASES
7.1 BASIN
To select the BASIN test case, define the cpp key BASIN in cppdefs.h and undefine the default regional configuration:
# undef REGIONAL
You can also explore the CPP options selected for the BASIN case.
You can check the BASIN settings in param.h.
4. Edit the compilation script jobcomp:
# set source, compilation and run directories
#
SOURCE=~/croco/croco/OCEAN
SCRDIR=./Compile
RUNDIR=`pwd`
ROOT_DIR=$SOURCE/..
#
# determine operating system
#
OS=`uname`
echo "OPERATING SYSTEM IS: $OS"
#
# compiler options
#
FC=$FC
#
# set MPI directories if needed
#
MPIF90=$MPIF90
MPIDIR=$(dirname $(dirname $(which $MPIF90) ))
MPILIB="-L$MPIDIR/lib -lmpi -limf -lm"
• OR by using a batch script (e.g. PBS) to launch the model (in clusters). For DATARMOR:
cp $CROCO_DIR/job_comp_datarmor.pbs .
qsub job_comp_datarmor.pbs
• How are the vertical levels distributed (look for the cpp key NEW_S_COORD)?
• What are the initial dynamical conditions (see both cppdefs.h and croco.in)?
• What do the air-sea exchanges look like?
10. Re-run this case in parallel on 4 CPUs:
To run in parallel, you first need to edit cppdefs.h and param.h, and to recompile.
• Edit cppdefs.h:
# define MPI
• Edit param.h:
#ifdef MPI
integer NP_XI, NP_ETA, NNODES
parameter (NP_XI=2, NP_ETA=2, NNODES=NP_XI*NP_ETA)
parameter (NPP=1)
parameter (NSUB_X=1, NSUB_E=1)
• Recompile.
• Run the model in parallel:
– By using classical launch command (on individual computers):
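A sketch of such a command, to be adapted to your environment:

mpirun -np NPROCS ./croco croco.in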
where NPROCS is the number of CPUs you want to allocate. mpirun -np NPROCS is a
typical mpi command, but it may be adjusted to your MPI compiler and machine settings.
– OR by using a batch script (e.g. PBS) to launch the model (in clusters), examples are
provided:
cp ~/croco/croco_tools/job_croco_mpi.pbs .
Edit job_croco_mpi.pbs according to your MPI settings in param.h and launch the
run:
qsub job_croco_mpi.pbs
Warning: NPROCS needs to be consistent with what you indicated in param.h during compilation.
Example: set up a convection test case, i.e. a test case that mimics the winter convection happening in the North-Western Mediterranean Sea.
1. Create a configuration directory:
mkdir ~/CONFIGS/CONVECTION
cd ~/CONFIGS/CONVECTION
cp ~/croco/croco/OCEAN/cppdefs.h .
cp ~/croco/croco/OCEAN/param.h .
cp ~/croco/croco/OCEAN/jobcomp .
cp ~/croco/croco/OCEAN/croco.in .
3. Edit cppdefs.h, param.h, and croco.in for your new CONVECTION case:
• Add a dedicated key for this test case CONVECTION (in cppdefs.h)
• Set up a flat bottom, 2500 m deep (variable depth in ana_grid.F; follow what is performed under the key BASIN, for instance)
• Set up your grid: 1000x1000x200 grid points (respectively in xi, eta and vertical directions) (parameters LLm0, MMm0 and N in param.h)
• Specify a length and width of 50 km in both directions (xi, eta) (variables Length_XI, Length_ETA in ana_grid.F)
• Set up an almost cold start, with velocity component fields set to white noise of about 0.1 mm/s (see in ana_initial.F what is performed for other test cases and fill in arrays u, v)
• Set up the initial ssh field to zero (array zeta in ana_initial.F)
• Set up the initial stratification, i.e. the temperature and salinity fields (in ana_initial.F: array t)
• Set up the wind stress forcing (svstr, sustr in analytical.F; you may follow what is set for INNERSHELF; not necessary)
• Set the permanent surface heat flux (stflx = -500 W/m2, i.e. -500/(rho0*Cp), in analytical.F, subroutine ana_stflux_tile)
Warning: In cppdefs.h, define your own cpp key CONVECTION, which might be a clone of the key BASIN. If you add salinity (with respect to the BASIN case), do not forget to also define the keys ANA_SSFLUX and ANA_BSFLUX.
Warning: In croco.in, if you add salinity (with respect to the BASIN case), do not forget to modify the number of tracers written (2*T) and the number of Akt values (2*1.0e-6).
Warning: In croco.in, adjust the time step and ndtfast to reach stability.
Then edit the compilation script jobcomp as for the BASIN case (compiler options, MPI directories), and compile.
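• Run the model in parallel with a classical launch command; a sketch, to be adapted to your environment:

mpirun -np NPROCS ./croco croco.in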
where NPROCS is the number of CPUs you want to allocate. mpirun -np NPROCS
is a typical mpi command, but it may be adjusted to your MPI compiler and machine
settings.
• OR by using a batch script (e.g. PBS) to launch the model (in clusters), examples are provided:
cp ~/croco/croco_tools/job_croco_mpi.pbs .
qsub job_croco_mpi.pbs
#define GLS_MIXING
• In croco.in, add the corresponding lines for the GLS history and averages fields.
real*4 r2, RR2
! Set kinematic surface heat flux [degC m/s] at horizontal
! RHO-points.
!
r2 = 10**2
ic = LLm0/2.
jc = MMm0/2.
do j=JstrR,JendR
  do i=IstrR,IendR
    RR2 = (i+iminmpi-ic)*(i+iminmpi-ic)+(j+jminmpi-jc)*(j+jminmpi-jc)
  enddo
enddo
EIGHT
To prepare your configuration working directory, you can use the script create_config.bash provided in
CROCO sources:
cp ~/croco/croco/create_config.bash ~/CONFIGS/.
Edit the USER SECTION of create_config.bash, notably the configuration name:
#==========================================================================================
# Configuration name
# ------------------
MY_CONFIG_NAME=BENGUELA_LR
Run create_config.bash:
./create_config.bash
# For pre-processing:
cp ~/croco/croco_tools/crocotools_param.m .
cp ~/croco/croco_tools/start.m .
# For running
cp ~/croco/croco/OCEAN/croco.in .
In your configuration working directory, you need at least the following files:
• For preprocessing:
– crocotools_param.m
– start.m
• For compiling:
– param.h
– cppdefs.h
– jobcomp
• For running:
– croco.in
NINE
CROCO preprocessing tools have been developed under Matlab software by IRD researchers (formerly Roms_tools).
Note: These tools have been made to easily build regional configurations using climatological data. To use interannual data, some facilities are available (NCEP, CFSR, QuickScat data for atmospheric forcing, SODA and ECCO for lateral boundaries). However, to use other data, you will need to adapt the scripts. All utilities/toolboxes required by the Matlab crocotools programs are provided within the UTILITIES directory, or can be downloaded here: http://www.croco-ocean.org/download/utilities/
Aforc_CFSR: Scripts for the recovery of surface forcing data (based on the CFSR reanalysis) for interannual simulations
Aforc_ECMWF: Scripts for the recovery of surface forcing data (based on ECMWF-ERAinterim simulations) for interannual simulations
Aforc_ERA5: Scripts for the recovery of surface forcing data (based on ECMWF-ERA5 simulations) for interannual simulations
Aforc_NCEP: Scripts for the recovery of surface forcing data (based on the NCEP2 reanalysis) for interannual simulations
• 2 scripts are used to set up your Matlab environment and your configuration settings:
• start.m: has to be launched at the beginning of any Matlab session to set the paths to the utilities and croco_tools routines. Edit mypath and myutilpath.
• crocotools_param.m: defines all the parameters and paths needed to build the grid, forcing and boundary files. Edit the different sections.
Note: In the croco_tools toolbox, the native Matlab NetCDF library is not used. A dedicated NetCDF library is provided and used. Its path is added to your Matlab environment through the start.m script.
First we will start by preparing surface and boundary conditions from climatological datasets. Those datasets can be downloaded from the CROCO website: https://www.croco-ocean.org/download/datasets/
CARS2009: CSIRO Atlas of Regional Seas database. Annual, seasonal and monthly climatology for temperature, salinity, nitrate, phosphate and oxygen
SST_pathfinder: SST global monthly climatology at a finer resolution (9.28 km) than COADS05, computed from AVHRR-Pathfinder observations from 1985 to 1997 (Casey and Cornillon, 1999)
WOA2009: World Ocean Atlas 2009 global dataset. References list: http://www.nodc.noaa.gov/OC5/WOA09/pubwoa09.html
WOAPISCES: A global dataset for biogeochemical PISCES data (annual and seasonal climatology). References: Fe and DOC from Aumont and Bopp, 2006; Si, O2, NO3, PO4 from WOA2005; DIC and alkalinity from Goyet et al.
1. First you may need to edit start.m, which contains the path to all useful croco_tools Matlab scripts:
Note: You can use these env variables in Matlab by using getenv('ENVVAR'). Example: if you have a $tools environment variable for croco_tools, you can write in start.m: tools_path=[getenv('tools') '/'];
2. Then edit crocotools_param.m, which is the namelist file for Matlab pre-processing:
crocotools_param.m is separated into several sections:
The first section is already set for BENGUELA_LR configuration, so you just need to change the
second section: directory names:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% 2 - Generic file and directory names
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% CROCOTOOLS directory
%
CROCOTOOLS_dir = ['~/croco/croco_tools/'];
%
% Run directory
%
RUN_dir=[pwd,'/'];
%
% CROCO input netcdf files directory
%
CROCO_files_dir=[RUN_dir,'CROCO_FILES/'];
%
% Global data directory (etopo, coads, datasets download from ftp, etc..)
%
DATADIR=['~/DATA/DATASETS_CROCOTOOLS/'];
%
% Forcing data directory (ncep, quikscat, datasets download with opendap, etc..)
%
FORC_DATA_DIR = ['~/DATA/'];
Note: crocotools_param.m is called at the beginning of each preprocessing script. You do not have to launch it independently.
Note: All the pre-processing scripts used for climatological forcing are in the
Preprocessing_tools directory
matlab
start
make_grid
During the grid generation process, the question “Do you want to use editmask ? y,[n]” is asked. The
default answer is n (for no). If the answer is y (for yes), editmask, the graphic interface developed by
A.Y.Shcherbina, will be launched to manually edit the mask. Otherwise the mask is generated from
the unfiltered topography data. A procedure prevents the existence of isolated land (or sea) points.
Finally, a figure illustrates the obtained bottom topography. Note that at this low resolution (1/3°), the topography has been strongly smoothed.
Build the atmospheric forcing: 2 options are available:
• create a forcing file with wind stress (zonal and meridional components), surface net heat flux, surface freshwater flux (E-P), solar shortwave radiation, SST, SSS, and the surface net heat-flux sensitivity to SST (dQdSST, used for heat flux correction when nudging towards SST and SSS)
• or create a bulk file which will be read during the run to perform a bulk parameterization of the fluxes using the COAMPS or Fairall 2003 formulation. This bulk file contains: surface air temperature, relative humidity, precipitation rate, wind speed at 10 m, net outgoing longwave radiation, downward longwave radiation, shortwave radiation, and surface wind speed (zonal and meridional components). It also contains the surface wind stress (zonal and meridional components), but it is not required or used by the model (except for specific debugging work). The bulk formulation computes its own wind stress.
make_bulk
or:
make_forcing
The settings relative to surface forcing are in section 3 of crocotools_param.m. In the case of
climatological forcing, the variables are cycled. You can see that here, for the sake of simplicity, we
are running the model on a repeating climatological year of 360 days.
A few figures illustrate the wind stress vectors and norm at 4 different periods of the year.
Note: make_bulk creates a forcing file that will be used with the cpp key BULK_FLUX, while make_forcing creates a forcing file containing the wind stress directly, used when BULK_FLUX is undefined. This second option is relevant if your atmospheric forcing comes from an atmospheric model with sufficient output frequency, and/or if you are comparing forced and coupled runs. Otherwise it is suggested to use make_bulk.
make_bry
Note: make_bry requires that you have previously run make_forcing, to compute the Ekman forcing at the surface.
or:
make_clim
Note: make_clim interpolates the oceanic forcing fields over the whole domain: only boundary
points + the 10 next points are actually used for sponge + nudging. Advantage: sponge + nudging
layers at the boundaries, Disadvantage: large amount of unused data.
make_bry interpolates the oceanic forcing fields at the boundary points only. Advantage: light
files (useful for long simulations), Disadvantage: no nudging layers (only a sponge layer for smooth
transition between the boundaries and the interior values).
make_ini
4. You can look at your generated input files in CROCO_FILES directory: You should have:
croco_grd.nc
croco_ini.nc
croco_blk.nc # or croco_frc.nc
croco_bry.nc # or croco_clm.nc
5. Summary to create a simple configuration from climatology files: In Matlab, execute the following:
start
make_grid
make_forcing
make_bulk
make_bry # or make_clim
make_ini
croco_grd.nc
croco_frc.nc (or croco_blk.nc)
croco_bry.nc (or croco_clim.nc)
croco_ini.nc
Dedicated scripts for interannual pre-processing can be found for the different forcing datasets in:
Aforc_CFSR: Scripts for the recovery of surface forcing data (based on the CFSR reanalysis) for interannual simulations
Aforc_ECMWF: Scripts for the recovery of surface forcing data (based on ECMWF-ERAinterim simulations) for interannual simulations
Aforc_ERA5: Scripts for the recovery of surface forcing data (based on ECMWF-ERA5 simulations) for interannual simulations
Aforc_NCEP: Scripts for the recovery of surface forcing data (based on the NCEP2 reanalysis) for interannual simulations
1. Edit crocotools_param.m: the first section should already be set if you have completed the previous tutorial.
In the second section, check the path to the forcing data directory:
% 2 - Generic file and directory names
In section 4, select only ini and bry files (no clim files: set makeclim = 0;) to avoid an overly long pre-processing; it is also the most usual setup:
% initial/boundary data options (1 = process)
% (used in make_clim, make_biol, make_bry,
% make_OGCM.m and make_OGCM_frcst.m)
%
Note: An important aspect is the definition of time, and especially the choice of a time origin. The time origin Yorig should be kept the same for all the preprocessing and postprocessing steps.
% ...
2. Then you can run the Matlab pre-processing for these interannual forcings. You should already have your grid set up; otherwise, run make_grid.
To build your interannual atmospheric forcing, the useful script is make_CFSR.
To build your interannual ocean forcing, the useful script is make_OGCM:
start
make_CFSR
make_OGCM
Warning: As this pluri-month preprocessing can be long and uses more CPU resources, you may need to submit it as a batch job; an example is provided:
cp ~/croco/croco_tools/example_job_prepro_matlab.pbs .
croco_blk_CFSR_Y????M?.nc
croco_bry_SODA_Y????M?.nc
croco_ini_SODA_Y????M?.nc
TEN
COMPILING
cppdefs.h: CPP keys* allowing to select the configuration, numerical schemes, parameterizations, forcing and boundary conditions.
* CROCO extensively uses the C preprocessor (cpp) during compilation to replace code statements, insert files into the code, and select relevant parts of the code depending on its directives.

param.h: Grid settings; the model grid size is set by:
LLm0 points in the X direction
MMm0 points in the Y direction
N vertical levels
For realistic regional cases, LLm0 and MMm0 are given by running make_grid.m, and N is defined in crocotools_param.m.
param.h also contains: parallelisation settings; tides, wetting-drying, point sources, floats, and stations specifications.
Warning: CROCO needs to be compiled for each configuration (domain, coupled, uncoupled, parameterizations. . . ), i.e., each time you change something in cppdefs.h or param.h.
10.1 cppdefs.h
2. Then, in cppdefs.h, you have one section for each case. Let’s explore the REGIONAL case section:
• Then, you can set parallelization option (you can set define MPI if you want to run in parallel):
/* Parallelization */
# undef OPENMP
# undef MPI
• Then, you can set I/O options (XIOS server, netcdf 4 parallel option, NB: we will have a dedicated
tutorial on XIOS):
/* I/O server */
# undef XIOS
• Non-hydrostatic option:
/* Non-hydrostatic option */
# undef NBQ
• Nesting settings:
/* Nesting */
# undef AGRIF
# undef AGRIF_2WAY
/* Wave-current interactions */
# undef MRL_WCI
• Managing open boundaries (you can choose to close one of the boundaries, useful in coastal
cases):
• Activating applications:
/* Applications */
# undef BIOLOGY
# undef FLOATS
# undef STATIONS
# undef PASSIVE_TRACER
# undef SEDIMENT
# undef BBL
• Defining a dedicated log file for CROCO standard output (default is undef, but you can define LOGFILE to facilitate the reading of model output, particularly useful for coupled simulations):
Warning: Keep LOGFILE undefined if you use plurimonth run scripts such as run_croco_inter.bash, because they already redirect the CROCO output and check it. . .
Warning: By default no reference time is used, and time is referred to the beginning of the
simulation only
/* Calendar */
# undef USE_CALENDAR
3. Then you have detailed settings (you can find a description of all cpp keys in the Contents and Architecture section):
• In grid configuration:
/* Grid configuration */
# define CURVGRID
# define SPHERICAL
# define MASKING
# undef WET_DRY
# define NEW_S_COORD
Warning: you should check that the vertical coordinate setting NEW_S_COORD is consistent with your pre-processing setting (vtransform=2 in crocotools_param.m)
• Then, you have to set your surface forcing according to your pre-processing:
– If you have prepared a croco_frc.nc file (using make_forcing.m):
/* Surface Forcing */
# undef BULK_FLUX
– If you have prepared a croco_blk.nc file (using make_bulk.m):
/* Surface Forcing */
# define BULK_FLUX
• Then, you have to set your lateral forcing according to your pre-processing as well:
– If you have prepared a croco_clm.nc file (using make_clim.m):
/* Lateral Forcing */
# define CLIMATOLOGY
and
# undef FRC_BRY
– If you have prepared a croco_bry.nc file (using make_bry.m):
/* Lateral Forcing */
# undef CLIMATOLOGY
and
# define FRC_BRY
10.2 param.h
#ifdef MPI
integer NP_XI, NP_ETA, NNODES
parameter (NP_XI=1, NP_ETA=4, NNODES=NP_XI*NP_ETA)
parameter (NPP=1)
parameter (NSUB_X=1, NSUB_E=1)
#elif defined OPENMP
parameter (NPP=4)
• In the case of OpenMP parallelization, NPP is the number of CPUs used in the computation.
• In the case of MPI parallelization, it is equal to NNODES.
• AUTOTILING (implemented by L. Debreu): cpp key that enables computing the optimal subdomain partition in terms of computation time.
10.3 jobcomp
Now that your input files are set up, you can proceed to compilation:
Here we assume that you have set a few environment variables for compilers and libraries. Here is an example
with Intel compilers and a netcdf library located in $HOME/softs/netcdf. Adapt these to your own settings (in
your .bashrc file):
# compilers
export CC=icc
export FC=ifort
export F90=ifort
export F77=ifort
export MPIF90=mpiifort
# netcdf library
export NETCDF=$HOME/softs/netcdf
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib
#
# compiler options
#
FC=$FC
#
# set MPI directories if needed
#
MPIF90=$MPIF90
MPIDIR=$(dirname $(dirname $(which $MPIF90) ))
MPILIB="-L$MPIDIR/lib -lmpi -limf -lm"
MPIINC="-I$MPIDIR/include"
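Once these variables are set, compilation is launched from the configuration directory; a sketch (the log file name is just a suggestion):

./jobcomp > jobcomp.log

Compilation is performed in the SCRDIR directory (./Compile by default), which then contains, among other things: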
• .o object files
Very summarized information on compilation options is given here. For further details, search the web or ask your cluster support team. Useful information can also be found on this page: http://www.idris.fr/jean-zay/cpu/jean-zay-cpu-comp_options.html
• Optimization options:
– -O0, -O1, -O2, -O3, -fast: optimization level. -O0 means no optimization; use it for debugging. -O3 and -fast are more aggressive optimization options that can lead to reproducibility problems in your run (it is especially better to avoid -fast).
– -xCORE-AVX2: vectorization option, very aggressive optimization => non-reproducibility of CROCO
– -fno-alias, -no-fma, -ip: other optimization options, commonly used
– -ftz: flushes denormal (very small) numbers to zero. It is set by default with -O1, -O2, -O3 (can be a problem for calculation precision)
• Debug options: -O0 -g -debug -fpe-all=0 -no-ftz -traceback -check all -fbacktrace -fbounds-check -finit-real=nan -finit-integer=8888
• Precision and writing options:
– -fp-model precise: important for good precision and reproducibility of your calculations
– -assume byterecl: record lengths are expressed in bytes instead of 4-byte words
– -convert big_endian: byte order used when writing binaries (important for avoiding huge spurious negative numbers)
– -i4, -r8: sizes used for integers and reals (also important for reproducibility between different clusters)
– -72: specifies that the statement field of each fixed-form source line ends at column 72.
– -mcmodel=medium -shared-intel: do not limit memory to 2 GB for data (useful for writing large output files)
In case of strange errors during compilation (e.g. “catastrophic error: could not find . . . ”), try one of these solutions:
• check that your home space is not full ;-)
• check your paths to compilers and libraries (especially the netCDF library)
• check that you have the right permissions, and that your executable files (configure, make. . . ) are indeed executable
• check that your shell script headers are correct, or add them if necessary (e.g. for bash: #!/bin/bash)
• try to log out of the machine, log back in, clean and restart the compilation
Errors and tips related to the netCDF library:
• with netcdf 4.3.3.1: you need to add the following compilation flag for all models: -mt_mpi
The error associated with a missing -mt_mpi flag is of this type: “/opt/intel//impi/4.1.1.036/intel64/lib/libmpi_mt.so.4: could not read symbols: Bad value”
• with netcdf 4.1.3: do NOT add the -mt_mpi flag
• if the HDF5 library is installed separately, add its path to your LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=YOUR_HDF5_DIR/lib:$LD_LIBRARY_PATH
• with netcdf 4, if you use the library split in two (a C part and a Fortran part), you need to place the links to the C library before the links to the Fortran library, and to put both paths in this same order in your LD_LIBRARY_PATH
In case of a ‘segmentation fault’ error:
• try to allocate more memory with “ulimit -s unlimited”
• try to launch the compilation as a job (batch) with more allocated memory
ELEVEN
To run the model, you need to have completed pre-processing (for realistic cases) and compilation phases. In your
working directory you need to have:
• For an idealized simulation (e.g. test cases): only the croco executable and the croco.in namelist file.
• For a realistic simulation (e.g. BENGUELA_LR), additionally:
# in CROCO_FILES:
croco_grd.nc # grid file
croco_bry.nc or croco_clm.nc # lateral boundary condition file
croco_frc.nc or croco_blk.nc # surface forcing file
croco_ini.nc # initial condition file
You first need to set all time, I/O, and different parameters in the CROCO namelist file: croco.in.
CROCO namelist file croco.in is set by default for the BENGUELA_LR case. So you should have nothing to
change.
The detail of all croco.in sections can be found here: croco.in
However you can check some settings:
• Time stepping:
NTIMES: number of time steps
dt[sec]: baroclinic time step
NDTFAST: number of barotropic time steps in one baroclinic time step
Note: Your time steps should be set according to the stability constraints:
– Barotropic mode:
$\frac{\Delta t}{\Delta x}\sqrt{gH} \le 0.89$
Note that considering an Arakawa C-grid divides the theoretical stability limit by a
factor of 2.
So for instance for a maximum depth of 5000 m and a resolution of 30 km:
$\Delta t \le \frac{0.89\,\Delta x}{2\sqrt{gH}} \approx 60\,\mathrm{s}$
– 3D advection:
With 60 barotropic time steps in one baroclinic time step, this results in a baroclinic time step of:
$\Delta t \le 60 \times 60\,\mathrm{s} = 3600\,\mathrm{s}$
You can check that this time step does not violate the CFL condition of your advection scheme with the CROCO time-stepping algorithm.
• By default no reference time is used, and time is referred to the beginning of the simulation only using
NTIMES. If you want to define the start and stop of the model by dates, you first need to edit cppdefs.h,
define this key, and recompile the model:
#define USE_CALENDAR
As we are running a climatological simulation, this is not very relevant (as the model is cycled
on a idealized 360-days period). This is more useful for interannual simulations.
• Check the paths to your input files (they should be properly set by default):
grid: filename
CROCO_FILES/croco_grd.nc
forcing: filename
CROCO_FILES/croco_frc.nc
bulk_forcing: filename
CROCO_FILES/croco_blk.nc
climatology: filename
CROCO_FILES/croco_clm.nc
boundary: filename
CROCO_FILES/croco_bry.nc
• Indicate if you are starting from an initial or restart file and its path:
NRREC: Switch to indicate start or re-start from a previous solution. NRREC is the time index of
the initial or restart NetCDF file assigned for initialization.
– If NRREC=1 you are starting from an initial file.
– If NRREC=X with X a positive number, you are starting from the Xth time
record in the restart file.
– If NRREC is negative (say NRREC=-1), the model will start from the most recent time
record. That is, the initialization record is assigned internally.
• Indicate the frequency of restart files, and their paths.
• Select the fields to be written in the history and averages files (T/F flags), e.g.:
F T T F T F F F T T T T T T T T 10*T
gls_averages: TKE GLS Lscale
T T T
• To run the BENGUELA_LR simulation in serial:
./croco croco.in
Where croco is your executable compiled with all your chosen options and parameterizations
and croco.in is your namelist file for croco.
• To run the BENGUELA_LR simulation in parallel (if you have compiled CROCO with #define MPI):
– By using classical launch command (on individual computers):
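A sketch of such a command, to be adapted to your environment:

mpirun -np NPROCS ./croco croco.in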
where NPROCS is the number of CPUs you want to allocate. mpirun -np NPROCS is a
typical mpi command, but it may be adjusted to your MPI compiler and machine settings.
– OR by using a batch script (e.g. PBS) to launch the model (in clusters), examples are
provided:
cp ~/croco/croco/SCRIPTS/example_job_croco_mpi.pbs .
qsub example_job_croco_mpi.pbs
croco.log contains the standard output of your run (information on your settings, input files, evolution of the time stepping). croco.log is also useful, when your run blows up, to search for the error.
You can explore your model outputs (croco_his.nc, croco_avg.nc) using different frameworks (ncview, ferret, etc).
In the croco_tools, a matlab interface is offered to explore your data: croco_gui, as well as a Python interface
croco_pyvisu. These are explored in other tutorials.
• Have a quick look at the results:
ncview croco_his.nc
TWELVE
Now that you have successfully run the default configuration, you can try running another configuration:
BENGUELA_VHR.
1. Create a new configuration directory for BENGUELA_VHR
2. As for the previous configuration, edit the paths in start.m and crocotools_param.m (or copy
start.m and crocotools_param.m from BENGUELA_LR)
3. Make the appropriate changes in crocotools_param.m to increase the resolution to 1/12°
4. Re-run preprocessing for this new configuration (grid, bulk, forcing, bry, ini)
5. Make the appropriate changes in cppdefs.h (define BENGUELA_VHR, MPI, BULK_FLUX, FRC_BRY,
undef CLIMATOLOGY), and param.h for running BENGUELA_VHR in parallel on 16 CPUs
6. As for the previous configuration, edit the paths in jobcomp (or copy jobcomp from BENGUELA_LR).
And re-compile the model
7. Make the appropriate changes in croco.in: change the time step!
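A rough estimate (a sketch using the barotropic stability bound from the Running chapter, where the 1/3° BENGUELA_LR setup gave $\Delta t \approx 3600\,\mathrm{s}$): the admissible time step scales linearly with $\Delta x$, so going from 1/3° to 1/12° divides it by 4:

$\Delta t \le \frac{0.89\,\Delta x}{2\sqrt{gH}} \quad\Rightarrow\quad \Delta t_{1/12^\circ} \approx \frac{3600\,\mathrm{s}}{4} = 900\,\mathrm{s}$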
8. As in the previous configuration, copy the batch job:
cp ~/croco/croco/SCRIPTS/example_job_croco_mpi.pbs .
Edit it (notably change the number of CPUs used), and run the model:
qsub example_job_croco_mpi.pbs
9. Test questions:
• On how many CPUs could you run the model (max # of CPUs)?
THIRTEEN
Before running, you should prepare your interannual input files following the Interannual Preprocessing tutorial.
To run a plurimonth simulation, we provide the following scripts in ~/croco/croco/SCRIPTS/
Plurimonths_scripts:
• run_croco.bash: Plurimonth run with climatological forcing
• run_croco_inter.bash: Plurimonth run with interannual forcing
These scripts:
• get the grid, the forcing, the initial and the boundary files
• run the model for 1 month
• store the output files in a specific form: e.g. croco_avg_Y????M?.nc
• replace the initial file by the restart file (croco_rst.nc) which has been generated at the end of the month
• re-launch the model for next month
A dedicated namelist input file is also requested and provided ~/croco/croco/OCEAN/croco_inter.in
All these files are already copied to your configuration directory if you have used create_config.bash.
Otherwise, copy them from the source directory to your configuration directory.
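A sketch of the copy, using the paths given above:

cp ~/croco/croco/SCRIPTS/Plurimonths_scripts/run_croco_inter.bash .
cp ~/croco/croco/OCEAN/croco_inter.in .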
1. Edit run_croco_inter.bash: Paths should already be correct.
Number of MPI CPUs and command for running:
# command for running the model : ./ for sequential job, mpirun -np NBPROCS for mpi run
Type of forcings:
Names of forcings:
# Atmospheric surface forcing dataset used for the bulk formula (NCEP)
#
ATMOS_BULK=CFSR
#
# Atmospheric surface forcing dataset used for the wind stress (NCEP, QSCAT)
#
ATMOS_FRC=CFSR
#
# Oceanic boundary and initial dataset (SODA, ECCO,...)
#
OGCM=SODA
#
NFAST=60
#
NY_START=2005
NY_END=2005
NM_START=1
NM_END=3
cp ~/croco/croco/SCRIPTS/example_job_run_croco_inter.pbs .
qsub example_job_run_croco_inter.pbs
croco_his_Y2000M1.nc
croco_his_Y2000M2.nc
croco_his_Y2000M3.nc
croco_avg_Y2000M1.nc
croco_avg_Y2000M2.nc
croco_avg_Y2000M3.nc
croco_rst.nc
Warning: If you get an error although your run did not BLOW UP, it may be because you have defined LOGFILE in your cppdefs.h. For using run_croco_inter.bash it should be undefined.
Instead of pre-processing your atmospheric bulk forcing, you can use online interpolation of atmospheric bulk
forcing.
To do so:
1. Your atmospheric files need to be in a format readable by CROCO. At the moment, the following forcings are implemented for online interpolation:
• CFSR data pre-formatted using the script Process_CFSR_files_for_CROCO.sh available
in croco_tools
• ERAI data pre-formatted using reformat_ECMWF.m (used in make_ECMWF.m in the
croco_tools)
• AROME data formatted in Meteo France framework
# define ONLINE
# ifdef ONLINE
# undef AROME
# undef ERA_ECMWF
# endif
Note: for ONLINE interpolation, default is CFSR format. AROME and ERA_ECMWF are also available
by defining the cpp-keys.
3. Re-compile the model First copy your old executable to keep it, then re-compile:
cp croco croco.bck
./jobcomp > jobcomp.log.online
mkdir DATA/CFSR_Benguela_LR/
#ln -s ~/DATA/METEOROLOGICAL_FORCINGS/CFSR/BENGUELA/CROCO_format/*2005*.nc DATA/CFSR_Benguela_LR/.
cp ~/DATA/METEOROLOGICAL_FORCINGS/CFSR/BENGUELA/CROCO_format/*2005*.nc DATA/CFSR_Benguela_LR/.
qsub example_job_run_croco_inter.sh
Note: In case of errors while using ONLINE, they are probably associated with time issues: check the time in your CFSR input files, and check your time origin Yorig.
FOURTEEN
NESTING TUTORIAL
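The nesting GUI is part of croco_tools; a sketch of launching it from Matlab, assuming the toolbox paths are set via start.m as in the preprocessing tutorials:

matlab
start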
nestgui
Warning: Be aware that the mask interpolation from the parent grid to the child grid is
not optimal close to corners. Parent/Child boundaries should be placed where the mask is
showing a straight coastline. A warning will be given during the interpolation procedure if
this is not the case.
5. ( If you want to change the topography input file for the child domain, click new child topo, choose
your new input topo file and edit n-band which is the number of grid points on which you will connect the
parent and child topography )
Click 2. Interp child to create the child grid. It generates the child grid file. Beforehand, you should select whether you are using a new topography (New child topo button) for the child grid, or just interpolating the parent topography on the child grid. In the first case, you should define which topography file will be used (e.g. ~/Roms_tools/Topo/etopo2.nc or another dataset). You should also define if you want the volume of the child grid to match the volume of the parent close to the parent/child boundaries (Match volume button; it should be “on” by default). You should also define the r-factor (Beckmann and Haidvogel, 1993) for topography smoothing (r-factor, 0.25 is safe) and the number of points used to connect the child topography to the parent topography (n-band; it follows the relation hnew = 𝛼.hnew + (1 - 𝛼).hparent, where 𝛼 goes from 0 to 1 over “n-band” points from the parent/child boundaries). You should also select the child minimum depth (Hmin; it should be less than or equal to the parent minimum depth), the maximum depth at the coast (Hmax coast), the number of selective Hanning filter passes for the deep regions (n filter deep) and the number of final Hanning filter passes (n filter final).
6. Click 3. Interp forcing or 3. Interp bulk to interpolate the forcing or bulk file on the
child grid. It interpolates the parent surface forcing on the child grid. Select the parent forcing file
to be interpolated (e.g. Run_BENGUELA_LR/CROCO_FILES/croco_frc.nc). The child forcing
file croco_frc.nc.1 will be created. The parent surface fluxes are interpolated on the child grid.
You can use Interp bulk if you are using a bulk formula. In this case, the parent bulk file (e.g.
Run_BENGUELA_LR/CROCO_FILES/croco_blk.nc) will be interpolated on the child grid.
( If you have changed the topography, Click Vertical interpolations)
7. Click 4. Interp initial or Interp restart to create initial or restart file. It interpolates
parent initial conditions on the child grid. Select the parent initial file (e.g. Run_BENGUELA_LR/
CROCO_FILES/croco_ini.nc). The child initial file croco_ini.nc.1 will be created. If the
topographies are different between the parent and the child grids, the child initial conditions are ver-
tically re-interpolated. In this case you should check if the options vertical corrections and
extrapolations are selected. It is preferable to always use these options. If there are parent biological fields in the initial files, they can be processed automatically; you have to define the type of biological model: for NChlPZD or N2ChlPZD2, click on the Biol button; for BioEBUS, click on the Bioebus button; for the PISCES biogeochemical model, click on the Pisces button. The fields needed for the initialization of these biological models will be processed. For information: in the case of an NPZD-type model (NChlPZD or N2ChlPZD2) there are 5 additional fields, in the case of BioEBUS there are 8 additional fields, and in the case of the PISCES biogeochemical model there are 8 more fields.
8. Click 5. Create croco.in to create croco.in file for child domain
9. Click Create AGRIF_FixedGrids.in to create input file for AGRIF
Note: The Interp clim button can be used to create a climatology file (i.e. boundary conditions) for the child domain, to test the child domain alone or to compare a 1-way online nested run with an offline nested run.
CROCO_FILES/croco_grd.nc.1
CROCO_FILES/croco_frc.nc.1 (or croco_blk.nc.1)
CROCO_FILES/croco_ini.nc.1
croco.in.1
AGRIF_FixedGrids.in
11. Once the input files have been built, you need to compile the model in nesting mode: define AGRIF in cppdefs.h and re-compile.
12. You will then be able to launch croco as usual. It will run as a single binary, with an internal loop over the number of grids. The child grid will use the *.1 files; this suffix is also added to the output files of the nest. You can also define more than one child grid.
qsub job_croco_mpi.pbs
FIFTEEN
ADDING RIVERS
If you want to include rivers in your simulation domain, there are several variables to define, such as:
• the number of rivers: Nsrc
• the position of the rivers on the model grid: Isrc and Jsrc
• the zonal or meridional axis of the river flow: Dsrc
• if flow (and concentration) is constant, the flow rate of the river (in m3/s): Qbar (positive or negative)
• if flow (and concentration) is variable, and read from a netCDF file, the direction of the flow qbardir :
– 1 for west-east / south-north
– -1 for east-west / north-south
• the type of tracer advected by the river: Lsrc
• the value/concentration: Tsrc
And re-compile.
Then in the croco.in file
psource: Nsrc Isrc Jsrc Dsrc Qbar [m3/s] Lsrc Tsrc
2
3 54 1 200. T T 20. 15.
3 40 0 200. T T 20. 15.
where Nsrc=2 is the number of rivers processed; each following line describes one river. For river #1:
• Isrc=3, Jsrc=54 are the i, j indices where the river is positioned
• Dsrc=1 indicates the orientation (here meridional => along V direction)
• 200 is the runoff flow value in m3/s; its sign gives the flow direction (positive: west-east / south-north)
• T T are true/false flags for reading or not the following variables (here temperature and salinity)
• 20 and 15 are respectively the temperature and salinity of the river. You can edit these parameters.
Warning: The source points must be placed on U or V points on the C-grid, not on rho-points.
qsub job_croco_mpi.pbs
Instead of using a constant flow, you can use a variable flow. For that, you need to read it from a netCDF file. First define the dedicated cpp key in cppdefs.h:
#define PSOURCE_NCFILE
Note: RUNOFF_DAI is a global monthly runoff climatology containing the 925 first rivers over the world, from
Dai and Trenberth, 2000
After asking you for some specifications for each river detected in your domain, for the selected rivers:
• it will compute the right location on the croco_grid according to the direction and orientation you defined
• it will create the river forcing netCDF file croco_runoff.nc containing the various river flow time series.
To do so, in CROCO_TOOLS, edit make_runoff.m and define the following flags:
clim_run=1
psource_ncfile_ts=0;
For the BENGUELA test case, you will have 2 rivers detected, Orange and Doring. We recommend defining them as zonal (0) and oriented from east to west (-1). The script will give you the lines to enter in the psource_ncfile section of the croco.in file.
psource_ncfile: Nsrc Isrc Jsrc Dsrc qbardir Lsrc Tsrc runoff file name
CROCO_FILES/croco_runoff.nc
2
25 34 0 -1 30*T 20 15
31 19 0 -1 30*T 20 15
where Nsrc=2 is the number of rivers; each following line describes one river.
To run CROCO with a variable concentration of river tracers, you need to define the following cpp-key in cppdefs.h
#define PSOURCE_NCFILE_TS
You also need to prepare your netcdf input file. Using the CROCO_TOOLS: edit make_runoff.m and change
the following flags:
psource_ncfile_ts=1;
if psource_ncfile_ts
psource_ncfile_ts_auto=1 ;
psource_ncfile_ts_manual=0;
end
After asking you for some specifications for each river detected in your domain, for the selected rivers, in addition to the river flow as in the previous section, it will also put the tracer concentration (temp, salt, no3, etc.) time series into the river forcing netCDF file croco_runoff.nc
psource_ncfile: Nsrc Isrc Jsrc Dsrc qbardir Lsrc Tsrc runoff file name
CROCO_FILES/croco_runoff.nc
2
25 34 0 -1 30*T 16.0387 25.0368
30 19 0 -1 30*T 16.1390 25.1136
Warning: The Tsrc values reported in croco.in are the annual-mean tracer values; they are just for information. The real tracer concentrations (Tsrc) are read from the runoff netCDF file created.
The above procedure can be applied to a nested grid. For this, edit make_runoff and change the gridlevel variable to the appropriate grid level.
% Choose the grid level into which you want to set up the runoffs
gridlevel=1
if ( gridlevel == 0 )
% -> Parent / zoom #O
grdname = [CROCO_files_dir,'croco_grd.nc'];
rivname = [CROCO_files_dir,'croco_runoff.nc']
clmname = [CROCO_files_dir,'croco_clm.nc']; % <- climato file for runoff
end
croco_runoff.nc.1
Note: The runoff has a default vertical profile, defined in CROCO as an exponential vertical distribution of velocity. It is in analytical.F, subroutine ana_psource, if you need to change it.
SIXTEEN
ADDING TIDES
Using the method described by Flather (1976), CROCO is able to propagate the different tidal constituents from
its lateral boundaries.
To do so, you will need to add the tidal components to the forcing file, and define the following cpp keys TIDES,
SSH_TIDES and UV_TIDES and recompile the model using jobcomp. To work correctly, the model should use
the characteristic method open boundary radiation scheme (cpp key OBC_M2CHARACT defined).
Warning: To get a clean signal you need to provide harmonic components for both tide elevation and tide velocity. In case you do not have velocity harmonics (UV_TIDES not defined), a set of reduced equations is available to compute the velocity from SSH (OBC_REDUCED_PHYSICS).
%%%%%%%%%%%%%%%%%%%%%
%
% 5-Parameters for tidal forcing
%
%%%%%%%%%%%%%%%%%%%%%
%
% TPXO file name (TPXO7)
%
tidename=[CROCOTOOLS_dir,'TPXO7/TPXO7.nc'];
%
% Number of tides component to process
%
Ntides=10;
%
% Chose order from the rank in the TPXO file :
% "M2 S2 N2 K2 K1 O1 P1 Q1 Mf Mm"
% " 1 2 3 4 5 6 7 8 9 10"
%
tidalrank=[1 2 3 4 5 6 7 8 9 10];
%
% Compare with tidegauge observations
%
lon0=18.37;
lat0=-33.91; % Cape Town location
Z0=1; % Mean depth of the tidegauge in Cape Town
start
make_tides
16.2 Compiling
2. Check/Edit param.h:
Warning: The number of tide components must be consistent with the one defined in crocotools_param.m.
16.3 Running
qsub job_croco_mpi.pbs
SEVENTEEN
VISUALIZATION (MATLAB)
The croco_gui utility has been developed under Matlab software to visualize CROCO outputs.
In Matlab:
start
croco_gui
A window pops up, asking for a CROCO history NetCDF file (see the screen captures below). You should select croco_his.nc (history file) or croco_avg.nc (average file) and click “open”.
The main window appears, and variables can be selected to obtain an image such as the figure below. On the left side, the upper box gives the available CROCO variable names and the lower box presents the variables derived from the CROCO model outputs:
• Ke: Horizontal slice of kinetic energy
• Rho: Horizontal slice of density using the non-linear equation of state for seawater of Jackett and McDougall (1995)
EIGHTEEN
VISUALIZATION (PYTHON)
Croco_visu is a tool written in Python to visualize history files generated by the CROCO model.
• Download and install miniconda: download Miniconda from the Conda website. See the documentation for
installation.
• Put the path in your shell environment file (ex:.cshrc, .bashrc)
source path_to_your_miniconda/etc/profile.d/conda.csh
To start croco_visu:
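A sketch, assuming the GUI entry point is the croco_gui_xarray.py file referenced later in this chapter (run from the Croco_visu directory, in the activated conda environment):

cd croco_visu
python croco_gui_xarray.py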
First, you have to choose a variable through the Croco Variables. . . menu, which lists all the 2D/3D variables of the history file.
Warning: Each time you type something in an input text box, you must validate the input with the Enter key.
You can type a longitude/latitude in the input longitude/latitude box (default is the mean longitude/latitude)
Now you can click on the Level/Depth Section button and a new window appears.
In this new window, you can only plot the current variable at the current level/depth. If you want another variable or another level/depth, you have to choose it first and click again on the Level/Depth Section button. You will then get another new window.
• to change the current time, you can type a new time in the text box or use the arrows
To zoom, you have to first click the Zoom button and then select the region to zoom.
To translate the plot, you have to first click the Pan button and then move the plot with the left mouse
button.
The Home button is to go back to the default view.
The Save Plot button will open a new window for you to save the current plot.
• to create animations
You have first to choose the start time and the end time in the two input text boxes. Then you can
click on the Animation button to start the animation. You can abort the animation with the Abort
button and if you select the Save Anim button, your animation is saved in a sub-directory Figures_.
You can choose new limits for the colorbar (return to validate the input) or you can go back to the
default colorbar with the Reset Color button.
You can show the contours of the topography by clicking on the Topo button (on/off) and you can
change the number of contours shown in the text input box (return to validate the input, default is
10).
At the right bottom corner of the window, you have the coordinates of the cursor.
On the main window, the two other buttons, Longitude Section and Latitude Section, will open the same kind of window as the Level/Depth Section, but at a given longitude or latitude.
The two last buttons of the main window Time Series and Vertical Profile create new windows to plot curves.
Both Time Series and Vertical Profile have the same possibilities.
• Zoom a part of the curve: you must first select the Zoom button and then select the region to zoom.
• Pan the curve: you must first select the Pan button and then translate the curve.
• Home to go back to the default view
• Save Plot : when you click on the Save Plot button, a popup window opens for you to choose the name of the file.
The time series is plotted at the current level/depth, longitude and latitude. The vertical profile is plotted at the
current longitude/latitude.
You also have the coordinates of the cursor at the right bottom of the window.
The only file you have to change is croco_wrapper.py. You have two files as examples in the repository:
1. croco_wrapper.py.benguela for the Benguela test case, where history files are created through the classic
method.
2. croco_wrapper.py.moz where the history files are created through XIOS.
Choose the right one to start:
cp croco_wrapper.py.benguela croco_wrapper.py
keymap_coordinates = {
'lon_rho': 'lon_r',
'lat_rho': 'lat_r',
'lon_u': 'lon_u',
'lat_u': 'lat_u',
'lon_v': 'lon_v',
'lat_v': 'lat_v',
'scrum_time': 'time'
}
keymap_variables = {
'zeta': 'ssh',
'u': 'u',
keymap_metrics = {
'pm': 'dx_r',
'pn': 'dy_r',
'theta_s': 'theta_s',
'theta_b': 'theta_b',
'Vtransform': 'scoord',
'hc': 'hc',
'h': 'h',
'f': 'f'
}
keymap_masks = {
'mask_rho': 'mask_r'
}
In the main window, you have another menu called Derived Variables..., which contains calculated variables derived from the base fields found in the history file.
• zeta_k
zeta_k is given by (∂v/∂x − ∂u/∂y) / f
• dtdz
dtdz is given by ∂T/∂z
• log(Ri)
Ri is given by N² / ((∂u/∂z)² + (∂v/∂z)²), with N = sqrt( −(g/ρ₀) ∂ρ/∂z )
def list_of_derived(self):
''' List of calculated variables implemented '''
keys = []
keys.append('pv_ijk')
keys.append('zeta_k')
keys.append('dtdz')
keys.append('log(Ri)')
return keys
• in the file derived_variables.py, add two functions get_newvar and calc_newvar to calculate the new variable
• in the file croco_gui_xarray.py, add the calls to the new function get_newvar in updateVariableZ, onTimeSeriesBtn and onVerticalProfileBtn
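For illustration, a minimal sketch of such a pair of functions follows; the attribute croco.ds and the exact signatures are assumptions, not the actual croco_visu API:
# in derived_variables.py: hypothetical example computing a temperature anomaly
def get_newvar(croco, tindex):
    ''' Gather the base field(s) needed by the new variable at time index tindex. '''
    temp = croco.ds['temp'].isel(time=tindex)   # assumed xarray-based access
    return calc_newvar(temp)

def calc_newvar(temp):
    ''' Compute the derived variable from the base field(s). '''
    return temp - temp.mean()                   # e.g. temperature anomaly
Do not forget to also register the new variable name in list_of_derived above.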
NINETEEN
NBQ TUTORIAL
The CROCO-NBQ kernel solves the compressible, non-hydrostatic Navier-Stokes equations. This kernel can be used to simulate complex nonlinear, nonhydrostatic physics in a realistic but computationally affordable configuration. Non-hydrostatic effects become important when the horizontal and vertical scales of motion are similar. In oceanic models this typically arises at horizontal scales of the order of 1 km, resolved with grid intervals of order 100 m. For motions of larger scale, resolved with grid intervals of order 1 km, the hydrostatic approximation is well satisfied.
Accurate simulation of nonhydrostatic effects requires resolving very small horizontal scales. The explicit representation of fine-scale turbulent processes requires a significant number of fundamental numerical choices, such as adapted advection schemes, adapted parameterizations, adapted boundary conditions, etc. In the following sections you will find some recommendations about the numerical schemes best suited to Large-Eddy Simulations (LES).
Non-monotonic vertical advection schemes (Akima for TS, Spline for UV)
On the contrary, TVD and WENO5 schemes enable sharper shock predictions and, as they preserve monotonicity, they do not generate spurious oscillations in the solution (figure below).
Monotonic or quasi-monotonic vertical advection schemes (WENO5 for TS, TVD for UVW)
Recommended advection schemes for LES :
CPP options of Momentum Advection
or
# undef REGIONAL
Explore the CPP options selected for the KH_INST case, and undef MPI after #elif defined KH_INST:
# undef MPI
#
# compiler options
#
FC=$FC
#
# set MPI directories if needed
#
MPIF90=$MPIF90
MPIDIR=$(dirname $(dirname $(which $MPIF90) ))
MPILIB="-L$MPIDIR/lib -lmpi -limf -lm"
MPIINC="-I$MPIDIR/include"
cp ~/croco/croco/TEST_CASES/croco.in.KH_INST croco.in
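Run the model (e.g. with the PBS job script used elsewhere in this tutorial), then look at the output:
qsub job_croco_mpi.pbs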
ncview khinst_his.nc
/* Non-Boussinesq */
# define NBQ
• To set up time steps adapted to your NBQ configuration (dt & NDTFAST in the croco.in file), you can activate the following in cppdefs_dev.h:
• DIAG_CFL : activate diagnostics of the CFL criteria
# define DIAG_CFL
If DIAG_CFL is defined, the CFL criteria are written to your output file every NINFO steps during the run:
• INT_3DADV : slow (baroclinic) mode CFL criterion. This parameter depends on your mesh grid size and your ocean current intensity (time-varying diagnostic). It should be less than approximately 1 (depending on the advection scheme, see section 4.2.5).
• EXT_GWAVES : CFL criterion based on the barotropic wave speed. It should be less than 0.89 (see section 4.2.5).
• NBQ_HADV : CFL criterion based on the pseudo-acoustic wave speed. This parameter should be less than 1.7.
• Compile your model
• Edit the croco.in file and add the following lines:
OBC_NBQ OBC
OBC_NBQORLANSKI Radiative conditions
NBQ_NUDGING interior/bdy forcing/nudging
NBQCLIMATOLOGY interior/bdy forcing/nudging
NBQ_FRC_BRY bdy forcing/nudging
In CROCO-NBQ, the "fast mode" includes, in addition to the external (barotropic) mode, the pseudo-acoustic mode that allows computation of the nonhydrostatic pressure within a non-Boussinesq approach (Auclair et al., 2018). A two-level time-splitting kernel is thus conserved, but the fast time step integrates a 3D compressible flow. Hence, acoustic waves, or "pseudo-acoustic" waves, have been re-introduced to avoid the Boussinesq degeneracy, which inevitably leads to a 3D Poisson system in non-hydrostatic Boussinesq methods, and to reduce computational costs. As long as the "pseudo-acoustic" waves remain faster than the fastest physical processes in the domain, their phase velocity can artificially be slowed down, rendering unphysical the high-frequency processes associated with bulk compressibility but preserving a coherent slow non-hydrostatic dynamics, with a softening of the CFL criterion. More details are given at http://poc.omp.obs-mip.fr/auclair/WOcean.fr/SNH/Pub/Tutorials/CROCO/Html_maps/Croco2018_map.html
Auclair, F., Bordois, L., Dossmann, Y., Duhaut, T., Paci, A., Ulses, C., Nguyen, C., 2018. A non-hydrostatic non-Boussinesq algorithm for free-surface ocean modelling. Ocean Modelling 132, 12–29. https://doi.org/10.1016/j.ocemod.2018.07.011
Related CPP options (for developers):
NBQ_IMP : The equation of motion for vertical velocity is solved implicitly in the vertical direction.
NBQ_THETAIMP : The semi-implicit theta method is used to reduce the numerical dissipation induced by the implicit treatment of the vertical velocity equation in the vertical direction (cf. Fringer et al. 2006).
NBQ_HZ_Prognostic : Treat the grid evolution prognostically.
NBQ_AM4 : Classical fourth-order Adams-Moulton (AM4) time-stepping method.
NOT_NBQ_AM4 : Forward-Backward time-stepping method.
NBQ_MASS : Perfect conservation of mass (undef NBQ_MASS: perfect conservation of volume).
NBQ_HZCORRECT : The sigma vertical grid is updated at each fast time step to reflect the newly solved elevations (as the free surface is now explicitly resolved at each fast time step).
NBQ_GRID_SLOW : The sigma vertical grid is updated only at each slow time step (reduces the computational time).
HZR Hzr : Trick to change the name of a variable in the equation of mass conservation.
TWENTY
COUPLING TUTORIAL
Here you will be guided to build a configuration and run it in forced and coupled modes using the tools provided
in croco_tools/Coupling_tools and croco/SCRIPTS/SCRIPTS_COUPLING.
1. Compilation
• Compile OASIS
• Compile your models in coupled mode with the same compilers and netcdf libraries
2. Namelists
• Define the namelist for OASIS: namcouple
• Check/edit the namelists and input files of the different models (CROCO: croco.in, WW3: ww3_grid.inp, ww3_shel.inp, WRF: namelist.input, MNH: EXSEG1.nam, TOY: TOYNAMELIST.nam)
3. Restart files
• Create restart files for the coupler
• If you are coupling nested models to CROCO, create a cplmask file
• Create restart/input files for the different models (see Preprocessing)
4. Run
• Launch the models simultaneously, e.g.:
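The exact command depends on your MPI implementation and machine; a typical MPMD launch line (executable names and processor counts are illustrative) is:
mpirun -np 4 crocox : -np 2 toyexe
(see also the footnote on MPMD execution later in this chapter)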
5. Outputs
• Check the logs and outputs, especially:
– The debug.root.0? files
– The model log files (e.g. croco.log)
– If you have problems in your coupled run, first check the dimensions of the grids in all grid
files (models grid files and OASIS grids and masks files)
The coupling tools provided with the model will perform steps 2-4. In the following tutorial, you will be guided through all the steps. First, you will try a simple coupling example to help you understand the coupling philosophy and the steps needed to run a coupled simulation; then you can move on to the advanced tutorial to perform coupled simulations using the provided coupling tools and scripts.
Note: In case of error during compilation, refer to the "Tips in case of error during compilation" below
1. You need to have downloaded the OASIS sources (see the Download section). They are assumed, in the following, to be under: $HOME/oasis/oasis3-mct
2. Then explore the oasis3-mct directory; you will find:
• doc: oasis documentation
• lib: mct, psmile, and scrip libraries folders
• util: notably containing the make_dir folder with TopMakefileOasis3, make.inc, and several make.* files for different machines
• examples
3. Enter the make_dir directory:
cd ~/oasis/oasis3-mct/util/make_dir
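4. Edit make.inc so that it points to the make.* file corresponding to your machine; it should contain an include line similar to: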
include $(home)/oasis/oasis3-mct/util/make_dir/make.YOURMACHINE
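5. Then launch the compilation. With OASIS3-MCT this is typically done through TopMakefileOasis3 (a sketch; check the OASIS documentation for the exact targets in your version):
make realclean -f TopMakefileOasis3
make -f TopMakefileOasis3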
Note: In case of error during compilation, note that classical errors are associated with:
• files missing executable permission
• issues in the paths given in make.YOURMACHINE
• compilation options that have to be set carefully (in make.YOURMACHINE)
1. To work in coupled mode you need to activate OA_COUPLING and/or OW_COUPLING in cppdefs.h:
#define OA_COUPLING
#define OW_COUPLING
Warning: MPI is mandatory for coupling, even if the run is launched on 1 CPU. Indeed the MPI
communicator is used to communicate with OASIS.
2. Edit all the usual paths, compilers, libraries in jobcomp, and notably OASIS path PRISM_ROOT_DIR:
# set OASIS-MCT (or OASIS3) directories if needed
#
PRISM_ROOT_DIR=~/oasis/compile_oasis3-mct
Warning: The -O3 compilation option is quite aggressive and may result in errors on some machines and with some compilers during the coupled run (e.g. Stokes velocities set to 0). To avoid such errors, set the optimization to -O2.
3. And compile:
./jobcomp >& compile_coupled.log
If compilation aborts (netcdf errors in oasis functions), you may need to change the following lines to:
LDFLAGS1="$LDFLAGS1 $LIBPSMILE $NETCDFLIB"
CPPFLAGS1="$CPPFLAGS1 ${PSMILE_INCDIR} $NETCDFINC"
FFLAGS1="$FFLAGS1 ${PSMILE_INCDIR} $NETCDFINC"
cd ~/CONFIGS/BENGUELA_LR_cpl/TOY_IN
ln -sf Makefile.YOURMACHINE Makefile
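Then compile the TOY model (assuming the Makefile provides a default target):
make clean
make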
If the compilation is successful, you should have the TOY executable toy_model.
Note: Currently the distributed version of WRF does not include coupling with waves. If you want to use such functionality, you can use the fork including modifications for coupling with WW3 and CROCO through the OASIS coupler, but note that this is a development version: https://github.com/wrf-croco/WRF/tree/WRF-CROCO
cd ~/wrf/WRFV4.2.1
# cleaning before configure (must be done if you re-compile)
./clean -a
# Then launch configure
./configure
Choose the distributed memory option (dm) and the compiler option appropriate to your machine setup (in our case it will be #24).
Note:
• For creating model output files larger than 2 GB, you should consider using the netcdf large-file support. It is activated through the WRFIO_NCD_LARGE_FILE_SUPPORT environment variable (set to 1).
• WRF is strict on netcdf dependencies, meaning that problems during compilation are often due to
netcdf settings. WRF uses:
– NETCDF environment variable that can be set before launching configure, otherwise configure
will ask you to provide your netcdf full path
– the NETCDF4 environment variable, which can be set to 1 if you want to use netcdf-4 facilities (if your netcdf library allows it). When using the netcdf-4 library, check that all dependencies are properly set; they are usually found with the nf-config --flibs command
– always check all the lines associated with the netcdf library and its dependencies in the generated configure.wrf: NETCDF4_IO_OPTS, NETCDF4_DEP_LIB, INCLUDE_MODULES (the last line should be the netcdf include path), LIB_EXTERNAL (the last line should be the netcdf library and its dependencies)
2. Check and edit the generated configure.wrf file. Notably edit the parallel compiler lines:
DM_FC = mpiifort
DM_CC = mpiicc
Note: WRF supports using multiple processors for compilation. The default number of processors used is 2, but you can compile with more processors by setting the J environment variable (example for 8 processors: J='-j 8').
Note: WRF compilation will take a while (about 1h) and may take a lot of memory. You may need to
launch compilation in a job. Examples for a few machines are provided here, along with a script to help you
compile:
~/croco/croco/SCRIPTS/SCRIPTS_COUPLING/WRF_IN/*.compile.wrf.*
~/croco/croco/SCRIPTS/SCRIPTS_COUPLING/WRF_IN/make_WRF_compil
5. To compile in coupled mode, you need to edit configure.wrf. First copy it to configure.wrf.coupled:
cp configure.wrf configure.wrf.coupled
INCLUDE_MODULES = $(MODULE_SRCH_FLAG) \
$(ESMF_MOD_INC) $(ESMF_LIB_FLAGS) \
-I$(WRF_SRC_ROOT_DIR)/main \
-I$(WRF_SRC_ROOT_DIR)/external/io_netcdf \
-I$(WRF_SRC_ROOT_DIR)/external/io_int \
-I$(WRF_SRC_ROOT_DIR)/frame \
-I$(WRF_SRC_ROOT_DIR)/share \
-I$(WRF_SRC_ROOT_DIR)/phys \
-I$(WRF_SRC_ROOT_DIR)/chem -I$(WRF_SRC_ROOT_DIR)/inc \
-I$(OA3MCT_ROOT_DIR)/build/lib/mct \
-I$(OA3MCT_ROOT_DIR)/build/lib/psmile.MPI1 \
-I$(NETCDFPATH)/include \
LIB_EXTERNAL = \
-L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf \
-L$(OA3MCT_ROOT_DIR)/lib -lpsmile.MPI1 -lmct -lmpeu -lscrip \
-L$(NETCDF)/lib -lnetcdff -lnetcdf
Warning: Compiling WRF in coupled mode requires a lot of memory (>3.5 GB). If needed, submit a job with extra memory to compile.
6. To compile:
./clean -a # clean before compilation
cp configure.wrf.coupled configure.wrf
./compile em_real >& compile.coupled.log
mkdir exe_coupled
cp configure.wrf exe_coupled/.
cp main/*.exe exe_coupled/.
cp compile.coupled.log exe_coupled/.
WPS compilation
cd ~/wrf/WPS # note that you should use the WPS version consistent with your WRF version!
./clean -a
./configure
Choose the distributed memory option (dm) and the compiler option appropriate to your machine setup.
2. Check and edit configure.wps, notably WRF_DIR and compilers:
WRF_DIR = ../WRF
DM_FC = mpiifort
DM_CC = mpiicc
3. Compile WPS:
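A typical WPS compilation command is (check the WPS documentation for your version):
./compile >& compile_wps.log
If compilation succeeds, you should obtain the following executables: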
geogrid.exe
ungrib.exe
metgrid.exe
cd ~/ww3/model/bin
– Also, the switches for the time interpolation of currents or wind need to be set to 0 in the coupled mode (and in the forced cases used for comparison with the coupled mode):
CRT0 WNT0
• a comp.COMPILER file
• a link.COMPILER file
The two latter files contain useful options and links for compilation. You therefore need to check the ones that you will use, depending on your compiler and machine settings.
In this tutorial, let's take the example of the comp.Intel and link.Intel files.
2. You can edit the compilation options in comp.Intel, for instance:
opt="-c $list -O3 -ip -xHost -no-fma -fp-model precise -assume byterecl -fno-
˓→alias -fno-fnalias -module $path_m"
3. First we will compile WW3 in uncoupled mode. To do that, create a switch file equivalent to switch_OASOCM but without the coupling switches:
cp switch_OASOCM switch_UNCOUPLED
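Then set up and launch the compilation, following the same pattern as for the coupled cases below:
./w3_clean -c
./w3_setup .. -c Intel -s UNCOUPLED
./w3_automake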
If compilation is successful, you will find your executables in ../exe; you should move them to a dedicated directory:
mkdir ../exe_UNCOUPLED
mv ../exe/* ../exe_UNCOUPLED/.
5. To compile in coupled mode, check that the $OASISDIR variable correctly refers to your OASIS compile
directory, and re-setup and re-launch your compilation:
For coupling with the ocean:
./w3_clean -c
./w3_setup .. -c Intel -s OASOCM
./w3_automake
mkdir ../exe_OASOCM
mv ../exe/* ../exe_OASOCM/.
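For coupling with the atmosphere: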
./w3_clean -c
./w3_setup .. -c Intel -s OASACM
./w3_automake
mkdir ../exe_OASACM
mv ../exe/* ../exe_OASACM/.
For coupling with both the ocean and the atmosphere, first create a switch_OASOCM_OASACM:
cp switch_OASOCM switch_OASOCM_OASACM
F90 NOGRB NC4 TRKNC DIST MPI PR3 UQ FLX0 LN1 ST4 STAB0 NL1 BT4 DB1 MLIM TR0 BS0 IC2 IS0 REF1 XX0 WNT2 WNX1 RWND CRT0 CRX1 TIDE COU OASIS OASOCM OASACM
And compile:
./w3_clean -c
./w3_setup .. -c Intel -s OASOCM_OASACM
./w3_automake
mkdir ../exe_OASOCM_OASACM
mv ../exe/* ../exe_OASOCM_OASACM/.
Note: a script to help you compile the various modes is also available in: $HOME/croco/croco/SCRIPTS/SCRIPTS_COUPLING/WW3_IN/make_WW3_compil
In case of strange errors during compilation (e.g. "catastrophic error: could not find ..."), try one of these solutions:
• check your home space is not full ;-)
• check your paths to compilers and libraries (especially Netcdf library)
• check that you have the right permissions, and check that your executable files (configure, make. . . ) are indeed executable
• check that your shell scripts headers are correct or add them if necessary (e.g. for bash: #!/bin/bash)
• try to exit/log out the machine, log in back, clean and restart compilation
Errors and tips related to the netcdf library:
• with netcdf 4.3.3.1: you need to add the following compilation flag for all models: -mt_mpi
The error associated with a missing -mt_mpi flag is of this type:
/opt/intel//impi/4.1.1.036/intel64/lib/libmpi_mt.so.4: could not read symbols: Bad value
• with netcdf 4.1.3: do NOT add the -mt_mpi flag
• with netcdf4, you need to place the hdf5 library path in your environment:
export LD_LIBRARY_PATH=YOUR_HDF5_DIR/lib:$LD_LIBRARY_PATH
• with netcdf4, if you use the library split in two (a C part and a Fortran part), you need to place the links to the C library before the links to the Fortran library, and put both paths in this same order in your LD_LIBRARY_PATH
In case of a 'segmentation fault' error:
• try to allocate more memory with "ulimit -s unlimited"
• try to launch the compilation as a job (batch) with more allocated memory
For this first step towards coupling, we will just use the BENGUELA_LR configuration and add coupling to a toy model that mimics a wave model. The toy model is available in croco/SCRIPTS/SCRIPTS_COUPLING/TOY_IN. It consists of a few Fortran routines that exchange variables with OASIS to mimic a wave or atmosphere model. For a more advanced coupling with actual atmospheric and wave models, you can go to the other sections of the coupling tutorial.
1. First copy the BENGUELA_LR configuration that you have already run in forced mode:
cp -r ~/CONFIGS/BENGUELA_LR ~/CONFIGS/BENGUELA_LR_cpl
2. For running in coupled mode, you first need to compile OASIS, and then re-compile CROCO in coupled
mode, and compile the TOY model. Follow the instructions in the Compilation section of the coupling
tutorial.
3. Set up the TOY model:
The toy model can send either fields read from a model file (for instance generated by a previous run of a model in forced mode), or constant or sinusoidal fields. Check the README in TOY_IN for more information. In every case, you will need to provide a grid to the toy model, here named grid_wav.nc. The toy model will read and exchange the variables specified in TOYNAMELIST.nam from an input file, here named toy_wav.nc. First edit the TOYNAMELIST.nam file: exchanged field names and number of time steps.
In the current example, the toy model is set to run 72 time steps of 3600 s.
For this tutorial, we will thus use the toy_wav.nc and grid_wav.nc files provided. You should have
the following executable, namelist, and input files to use the TOY model:
• toy_model
• TOYNAMELIST.nam
• grid_wav.nc
• toy_wav.nc
4. Set up CROCO:
Edit the croco.in to run over the same duration:
20*T
wave_average_fields: hrm frq action k_xi k_eta eps_b eps_d Erol
˓→eps_r
20*T
5. Edit the OASIS namelist, namcouple, to specify which fields will be coupled. Base namcouple files can be found in the croco/SCRIPTS/SCRIPTS_COUPLING/OASIS_IN directory. Copy the relevant namcouple:
cp ~/croco/croco/SCRIPTS/SCRIPTS_COUPLING/OASIS_IN/namcouple.base.ow.toywav ~/CONFIGS/BENGUELA_LR_cpl/namcouple
In this namcouple, you will have to edit all the fields denoted in brackets <...>. Let's browse the namcouple file. It has several sections:
• A first section with general settings:
– the number of fields to exchange (in our case 7: 3 from the ocean to the wave
model (SSH, UOCE, VOCE), and 4 from the wave to the ocean model (HS,
T0M1, SDIR, CDIR))
– the number and names of the model executables: names must be exactly 6 characters here, so you need to rename your model executables to these 6-character names:
mv croco crocox
mv toy_model toyexe
– the duration of the run in seconds: you need to change <runtime> to your actual duration (3 days * 24 h * 3600 s = 259200)
– the debug level (see the detailed explanation in the comments in the namcouple file)
• A second section, with the information on the exchanged fields. A typical sub-section for one exchanged field looks like:
line 1: field in the sending model, field in the target model, unused index, coupling period, number of transformations (here 1 interpolation), restart file, field status
line 2: number of points of the sending model grid (without halo) in the first and second dimensions, then of the target grid in the first and second dimensions, sending model grid name, target model grid name, lag = time step of the sending model
line 3: sending model grid periodical (P) or regional (R) and number of overlapping points, target model grid periodical (P) or regional (R) and number of overlapping points
line 4: list of transformations performed (here only the grid interpolation SCRIPR keyword; see the OASIS documentation for more information)
line 5: parameters for each transformation (here distributed weight interpolation; see the OASIS documentation for more information)
You need to edit all the fields denoted in brackets <...>:
<cpldt> the coupling frequency in seconds for each field you will ex-
change
<ocenx> the number of points in xi direction for CROCO (see param.h)
<oceny> the number of points in eta direction for CROCO (see param.h)
<wavnx> the number of points in x direction for the TOY model (see
grid_wav.nc file)
<wavny> the number of points in y direction for the TOY model (see
grid_wav.nc file)
<ocedt> the CROCO time step
<wavdt> the TOY model time step (see TOYNAMELIST.nam)
6. Finally, you need to prepare restart files for the coupler (in addition to the model initial/restart files). To do so, two scripts are provided in the Coupling tools, to start either from calm conditions or from previously existing files. In our case we will start from calm conditions. Note that this script uses the nco library, so you should have it installed/loaded to run the script:
# first copy the script create_oasis_restart_from_calm_conditions.sh into ~/CONFIGS/BENGUELA_LR_cpl/.
# launch the creation of restart file for OASIS for the toy model:
# first argument: grid name
# second argument: restart file name
# third argument: type of model
# fourth argument: list of variables to initialize to 0
./create_oasis_restart_from_calm_conditions.sh grid_wav.nc wav.nc toy "TOY_T0M1 TOY___HS TOY_CDIR TOY_SDIR"
# launch the creation of restart file for OASIS for CROCO model:
./create_oasis_restart_from_calm_conditions.sh CROCO_FILES/croco_grd.nc oce.nc croco "SRMSSHV0 SRMVOCE0 SRMUOCE0"
You should have now in your configuration directory wav.nc and oce.nc, which are the
OASIS restart files.
7. You are now ready to run CROCO in coupled mode with the toy model:
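The launch command is machine dependent; with a generic MPI launcher, an MPMD invocation of the two renamed executables could look like (processor counts are illustrative):
mpirun -np 4 crocox : -np 1 toyexe
Among the files produced, you should find the OASIS logs, e.g.: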
debug.root.02 # OASIS log file for the master processor of model #2 (CROCO in our case)
Note: If you have problems running the coupled model, you need to check:
• The dimensions of the grids in all grid files (models grid files and OASIS grids and masks
files)
You can then check your new CROCO outputs in CROCO_FILES: you should see the additional wave-field outputs (e.g. hrm), and, if you compute the difference between the coupled and non-coupled CROCO outputs, small differences in the surface currents for example.
8. If you then want to use actual coupling with an atmospheric or wave model, and run production simulations in coupled mode, follow the next steps of the Coupling tutorial. It uses the full Coupling toolbox provided in croco_tools/Coupling_tools and croco/SCRIPTS/SCRIPTS_COUPLING. It will help you create a dedicated architecture for coupled runs, and it will provide you with a set of scripts for running coupled simulations without managing all the files one by one. Basically, the Coupling toolbox will manage:
• CROCO compilation if requested
• Copying the model executables to your configuration directory
• Getting models input files
• Preparing OASIS restart files
• Editing the namelists, that is, automatically replacing all the fields in brackets <...> in the different namelist files (for all models and for OASIS)
• Launching the run
• Putting output files in a dedicated output directory
• Putting restart files for a future run in a dedicated restart directory
• Launching the next job if requested
If you have successfully run the simple CROCO-TOY coupled example and you want to perform more advanced coupled simulations, you can follow this advanced coupling tutorial. Note that it requires you to be quite familiar with the various models to couple.
A set of coupling tools has been designed to help build and run coupled configurations. It is provided within the croco/SCRIPTS/SCRIPTS_COUPLING directory.
Some pre-processing tools are also provided in the croco_tools/Coupling_tools directory.
First the contents of the SCRIPTS_COUPLING toolbox will be described, and then the different steps for running
a coupled simulation.
In OASIS_IN:
In CROCO_IN:
croco.in.base : base namelist file for CROCO (timestepping, input, output. . . ), in which <...> will be replaced by oce_nam.sh from SCRIPTS_TOOLBOX
In WRF_IN:
CONFIGURE_WRF/MACHINE :
configure.wrf.coupled : example of a configure file for compiling WRF in coupled mode
configure.wrf.uncoupled : example of a configure file for compiling WRF in forced mode
In WW3_IN:
In SCRIPTS_TOOLBOX:
*_nam.sh : update the pre-filled namelists with the information from mynamelist.sh
chained_job.sh : submit all jobs at the beginning, each job having a dependency condition on the previous one
caldat.sh : return the calendar date and time for a given Julian date
julday.sh : calculate the Julian day number for a given month, day, and year
NAMELIST/namelist_* : the different namelists which are concatenated, in create_config, to build mynamelist.sh
PATHS/path_*.sh : scripts used in create_config to build myenv_mypath.sh
OASIS_SCRIPTS/create_oasis_grids_for_wrf.sh : script to create the grids.nc and masks.nc files for OASIS for WRF (useful only if you are using a version of WRF in which the oasis function is not implemented; in the wrf-croco fork the function is implemented and this script is not used)
OASIS_SCRIPTS/create_oasis_restart_from_cal. . .
CROCO:
README_preprocess_croco : readme to use the croco_tools classic pre-processing (in Matlab)
README_nest_cpl : readme to prepare nests in coupled runs
make_grid_from_WRF.m : script to generate a grid for CROCO from the WRF grid, optionally with a refinement coefficient
find_childgrid_inparentgrid.m : script to find the position of a nested grid in the parent grid before using the AGRIF tools
script_make_CFSR_wind_for_ww3.sh : script to create a wind input file for WW3 from CFSR
script_make_WRF_wind_for_ww3.sh : script to create a wind input file for WW3 from WRF
script_make_CROCO_current. . . .sh : script to create current and level input files for WW3
UV2T.sh : useful function to change from the U,V grids to the T grid, used in the above-mentioned scripts
WRF_WPS:
README_download_CFSR_data, README_wps, README.Vtable : some useful readmes for WPS
configure.namelist.wps : configure file to edit for running WPS
Vtable.CFSR_sfc_flxf06, Vtable.CFSR_press_pgbh06 : Vtables for CFSR data
Vtable.GDAS_4soillevel_my : Vtable for GFS/GDAS data
METGRID.TBL.GDAS : table for Metgrid
job.wps.* : job scripts to run the WPS pre-processing
run_wps.bash : script to run WPS (WRF pre-processing)
CONFIGURE_WPS : examples of configure files for compiling WPS
Footnote: MPMD (Multiple Program Multiple Data) is supported on some machines. Different executables are launched and communicate with each other using MPI; all MPI processes are included within the same MPI_COMM_WORLD communicator. This execution method uses a text file (called here app.conf) which contains the mapping between MPI processes and executables.
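For illustration, such a mapping file could look like this (the format varies between MPI launchers; this sketch follows the slurm --multi-prog convention, and the names and ranks are placeholders):
0-3 ./crocox
4 ./toyexe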
The idea of the coupling tools is to facilitate the management of coupled configurations, of the run itself, and of the handling of inputs/outputs.
The first step is to create a configuration with the usual create_config.bash script, by specifying which models you want to use in the models options.
From there a configuration architecture will be built:
HOMEDIR/CONFIGS/MY_CONFIG_NAME
create_config.bash.bck
myenv_mypath.sh
mynamelist.sh
myjob.sh
submitjob.sh
- SCRIPTS_TOOLBOX
- PREPRO
- OASIS_IN
- CROCO_IN
- WW3_IN
- WRF_IN
- XIOS_IN
WORKDIR/CONFIGS/MY_CONFIG_NAME
- OASIS_FILES
- CROCO_FILES
- WW3_FILES
- WRF_FILES
- DATA
To prepare your configuration working directory, you can use the script create_config.bash provided in
CROCO sources:
cp ~/croco/croco/create_config.bash ~/CONFIGS/.
#==========================================================================================
# Configuration name
# ------------------
MY_CONFIG_NAME=BENGUELA_cpl
Run create_config.bash:
./create_config.bash
Go into your configuration directory; open, check, and if needed edit the paths in myenv_mypath.sh, then source it (you need to be in a bash environment):
source myenv_mypath.sh
CROCO preprocessing
WW3 pre-processing
WW3 GRIDGEN
Preprocessing tools for WW3 have been developed under Matlab. They are available in the GRIDGEN Matlab package (a tutorial is available here: ftp://ftp.ifremer.fr/ifremer/ww3/COURS/WAVES_SHORT_COURSE/TUTORIALS/TUTORIAL_GRIDGEN/waves-workshop-exercise-gridgen.pdf).
Basic steps for regular grids are summarized here:
1. Define your grid parameters:
dx= ... # in degrees
dy= ... # in degrees
lon1d=[...:dx:...] # in degrees
lat1d=[...:dy:...] # in degrees
[lon,lat]=meshgrid(lon1d,lat1d);
2. Coastline (defined as polygons in coastal bound . . . .mat) and bathymetry (e.g., etopo1.nc) files are used. Some threshold values are set up:
lim_wet=... ; # proportion of the cell from which it is considered "wet"
cut_off=0; # depth at which cell is considered as "wet"
dry_val=999; # value given to "dry" cells
4. Definition of boundaries:
lon_start=min(min(lon))-dx;
lon_end=max(max(lon))+dx;
lat_start=min(min(lat))-dy;
lat_end=max(max(lat))+dy;
m=ones(size(depth));
m(depth==dry_val)=0;
b_split=split_boundary(b,5*max([dx dy])); # splitting to make computation more efficient
lim_wet=0.5;
offset=max([dx,dy]);
# mask cleaning remove lonely wet cells close to the coastline:
m2=clean_mask(lon,lat,m,b_split,lim_wet,offset);
cell_limit=-1 ; # if negative, all water bodies except the largest are considered dry (i.e. remove all lakes or closed seas); if positive: . . .
write_ww3file([data_dir,'/','bottomm2','.inp'],depth'.*(-1));
• build the mask for WW3: mask=1 is water, mask=0 is for points which won’t be computed, mask=2
for active boundary points
• write the mask file:
write_ww3file([data_dir,'/','mapsta','.inp'],mm');
Alternative
Alternatively, you can build the grid input files from a CROCO grid file. A script is provided in
Coupling_tools/WW3: make_ww3_grd_input_files_from_croco_grd.m
Warning: Do not set the mask to 0 all around your domain; it would create problems in the OASIS interpolations. Use either 1 for sea points or 2 for boundary points.
Optionally, wind, current, and water level forcing files with a valid time axis have to be prepared (if you need them as forcing for your WW3 run; they are not requested in the fully coupled ocean-wave-atmosphere mode).
A few scripts for preparing ww3 forcing files from CROCO (current and water level), WRF (wind) and CFSR (wind, files already processed through Process_CFSR_files_for_CROCO.sh) are provided in croco_tools/Coupling_tools/WW3:
• script_make_CROCO_current_and_level_for_ww3.sh
• script_make_WRF_wind_for_ww3.sh
• script_make_CFSR_wind_for_ww3.sh
WW3 routines are named ww3_ROUTINENAME and take by default an input file named ww3_ROUTINENAME.inp. You have to set the parameters in these .inp input files before running.
Steps for WW3 pre-processing are:
./ww3_grid # To prepare the grid and run (NB: time steps are defined in the ww3_grid.inp file)
./ww3_prnc # To prepare wind forcing if you want to use one (not mandatory)
./ww3_strt # To prepare the initialisation (not mandatory; a default rest state is used if it is not run)
These steps will be performed automatically by the coupling scripts, when you submit the job.
Note: on mask/mapsta and bathy in WW3: the input map status (MAPSTA) value in the mask file can be:
• -2 : excluded boundary points (sea points covered by ice)
• -1 : excluded sea points (sea points covered by ice)
• 0 : excluded points (land)
• 1 : sea points (ocean)
• 2 : active boundary points
• 3 : excluded
• 7 : ice
The final possible values of the output map status MAPSTA are :
• -5 : other disabled point
• -4 : point masked in the two-way nesting
• -3 : dry point covered by ice
• -2 : dry point, not covered by ice
• -1 : wet point covered by ice
• 0 : land point
• 1 : active sea point
• 2 : active boundary point
• 8 : excluded sea/ice point
• 7 : excluded sea point, considered iced
• 15 : excluded sea point, considered dried: can become wet
• 31 : excluded sea point, inferred in nesting
• 63 : excluded sea point, masked in 2-way nesting
The coastline limiting depth (m, negative in the ocean) defined in ww3_grid.inp will also affect your MAPSTA: points with depth values above this coastline limit will be transformed into land points and therefore considered as excluded points (they never become wet points, even if the water level grows over them). In the output of the model, the depth (dpt) is described as DEPTH = LEV - BATHY, in which the bathy is negative in the sea and positive on land, so the depth will be positive in the sea and a fill value on land. When the input water level (LEV) increases, it increases the output depth (DPT) value. The input water level forcing value is stored in the WLV output variable, which gives the possibility to retrieve the input bathy value at each grid point: BATHY = WLV - DPT.
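For example, at a grid point where the input bathy is -20 m (sea) and the input water level is +1 m, the output depth is DPT = LEV - BATHY = 1 - (-20) = 21 m, and the input bathy can be recovered as BATHY = WLV - DPT = 1 - 21 = -20 m.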
WRF preprocessing
Running WPS
./g2print.exe YOURDATAFILE
• Vtable to read the grib data: existing Vtables can be found in the WPS source directory under WPS/ungrib/Variable_Tables, and information on how to choose Vtables can be found here: http://www2.mmm.ucar.edu/wrf/users/download/free_data.html
Note: For CFSR, you will need 2 Vtables: one for the fields on pressure levels,
one for the fields on surface level. Both Vtables are available in croco_tools/
Coupling_tools/WRF_WPS directory:
Vtable.CFSR_press_pgbh06
Vtable.CFSR_sfc_flxf06
ungrib therefore needs to be run twice (once for each type). This is done in run_wps.bash (see below).
A few scripts have been made to help you run WPS. You can find them in your croco_tools/
Coupling_tools/WRF_WPS directory:
• configure.namelist.wps
• run_wps.bash
• job.wps.*
1. You should find them in YOURCONFIG/PREPRO/WRF_WPS. Edit all the required lines in configure.namelist.wps, and edit all the required paths in run_wps.bash.
2. Run WPS directly (or using job.wps.pbs if you need to submit it in batch):
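A typical invocation is shown below; the batch variant uses the job.wps.* scripts mentioned above:
./run_wps.bash
# or, in batch:
qsub job.wps.pbs
If successful, you should obtain: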
geo_em.d01.nc
geo_em.d02.nc
met_em.d01.....nc # numerous files where ’...’ are dates
met_em.d02.....nc # numerous files where ’...’ are dates
3. Check your metgrid files by looking at some variables with ncview (e.g. LANDMASK, PSFC, PMSL, SKINTEMP, TT . . . )
If some variables are missing, it is probably because you did not process ungrib and metgrid for all
your input data.
If something appears weird, it may be due to a bad interpolation (for example due to a too coarse land-sea mask in the original data). If so, re-run WPS with an updated METGRID.TBL.
Running real.exe
After running the WPS pre-processing, you need to run the real.exe program, which actually creates the WRF input files for realistic cases from the WPS-generated files.
Warning: You need to use real.exe from the uncoupled compilation, even for a coupled run.
A script has been made to help you run real.exe: run_real.bash. You can find it in your ~/CONFIGS/
BENGUELA_cpl/WRF_IN directory or in the croco/SCRIPTS/SCRIPTS_COUPLING/WRF_IN. It also
uses:
• configure.namelist.wps
• namelist.input.base.complete
1. Edit the user settings in run_real.bash: paths, MPI settings. . .
2. If needed, edit namelist.input.base.complete with your choice of parameterizations. DO NOT EDIT the fields in brackets <...>; they will be replaced by run_real.bash with appropriate values.
Warning: For coupling with waves and currents, only the YSU surface and boundary layer schemes are possible at the moment. Be sure to select these.
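3. Then launch run_real.bash, directly or through a batch job depending on your machine (a sketch; adapt to your scheduler):
./run_real.bash
If successful, you should obtain: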
wrfinput_d01_DATE
wrfbdy_d01_DATE
wrflowinp_d01_DATE # if sst_update is set to 1
wrfdda_d01_DATE # if nudging is activated
wrf*_d02_DATE # if you have 2 domains
OASIS pre-processing
To run models in coupled mode, you need to have completed the compilation and pre-processing phases for each model. Then choose the case you want from the list below.
CROCO-TOY (wav or atm)
In this case you should have in your $CHOME directory:
• myenv_mypath.sh
• mynamelist.sh
• myjob.sh
• CROCO_IN
• TOY_IN
• OASIS_IN
• SCRIPTS_TOOLBOX
myenv_mypath.sh should already have been filled in before compilation. In TOY_IN, you must have the executable toy_model.
To make the run you need to modify the files myjob.sh and mynamelist.sh.
• In myjob.sh, you will have to fill in information about the jobs:
# Real job duration in sec (converted to MACHINE format in submitjob)
export TIMEJOB=1800
#-------------------------------------------------------------------------------
# Your run can be divided into several jobs (e.g.: a 1-year run into 12 jobs of 1 month)
Along with the number of cpu you will use for each model:
# nb of CPUs for each model
export NP_OCEX=2
export NP_OCEY=2
export NP_TOY=2
There are other more advanced options, but we will not focus on them for now.
• In mynamelist.sh, specify the name of the experiment, the run type (frc, oa, ow, owa), and which models are used. From here on, we will consider the toy model as an atmospheric model:
export CEXPER=BENGUELA_oa_toyatm
export RUNtype=oa
#
export USE_OCE=1
export USE_TOYATM=1
export USE_TOYOCE=0
export USE_TOYWAV=0
#
Set the exe path (for croco it is usually CROCO_IN, corresponding to OCE_NAM_DIR in myenv_mypath.sh):
#-------------------------------------------------------------------------------
# Exe paths
# ------------------------------------------------------------------------------
export OCE_EXE_DIR="${CHOME}/CROCO_IN"
export TOY_EXE_DIR="${CHOME}/TOY_IN"
#-------------------------------------------------------------------------------
# CPL
#-------------------------------------------------------------------------------
# namelist
export namcouplename=namcouple.base.${RUNtype}${istoy}
# coupling frequency
export CPL_FREQ=21600
#-------------------------------------------------------------------------------
# OCE
#-------------------------------------------------------------------------------
# namelist [Info: grid size is directly read in oce_compile.sh and cpl_nam.sh ]
# Online Compilation
export ONLINE_COMP=1
# Time steps
export TSP_OCE=800
export TSP_OCEF=60
# Parameter
export hmin=75; # minimum water depth in CROCO, delimiting coastline in WW3
# domains
export AGRIFZ=0
export AGRIF_2WAY="FALSE"
# forcing files
export ini_ext='ini_SODA' # ini extension file (ini_SODA,...)
export bdy_ext='bry_SODA' # bry extension file (bry_SODA,...)
export surfrc_flag="FALSE" # Flag if surface forcing is needed (FALSE if cpl)
export interponline=0 # switch (1=on, 0=off) for online surface interpolation
export frc_ext='blk_CFSR' # surface forcing extension (blk_CFSR, frc_CFSR,...). If interponline=1, just specify the type (ECMWF, CFSR, AROME,...)
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Since the toy model mimics an atmospheric model, put "atm" in the list of models. You must also provide the input file you put in TOY_FILES:
#-------------------------------------------------------------------------------
# TOY
#-------------------------------------------------------------------------------
# type
export toytype=("atm") #oce,atm,wav
# forcing files
export toyfile=("$CWORK/TOY_FILES/wrfout_d01_20050101_20050131.nc")
export timerange=('2,125')
Now that you have completed the necessary files, you are ready to run your simulation. To do so, simply do:
./submitjob.sh
CROCO-WRF
In your ${CHOME} directory, you should have already filled in myenv_mypath.sh.
To make the run, you need to modify the files myjob.sh and mynamelist.sh.
• In myjob.sh, you will have to fill in information about the jobs:
#-------------------------------------------------------------------------------
# Your run can be divided into several jobs (e.g.: a 1-year run into 12 jobs of 1 month)
Along with the number of cpu you will use for each model:
There are other more advanced options, but we will not focus on them for now.
• In mynamelist.sh, specify the name of the experiment, the run type (frc, oa, ow, owa), and which models are used:
#
export CEXPER=BENGUELA_oa
export RUNtype=oa
#
export USE_ATM=1
export USE_OCE=1
Set the exe path (for croco it is usually CROCO_IN, corresponding to OCE_NAM_DIR in myenv_mypath.sh):
#-------------------------------------------------------------------------------
# Exe paths
# ------------------------------------------------------------------------------
export OCE_EXE_DIR="${CHOME}/CROCO_IN"
export ATM_EXE_DIR="${ATM}/exe_coupled"
# coupling frequency
export CPL_FREQ=21600
#-------------------------------------------------------------------------------
# OCE
#-------------------------------------------------------------------------------
# namelist [Info: grid size is directly read in oce_compile.sh and cpl_nam.sh ]
# Online Compilation
export ONLINE_COMP=1
# Time steps
export TSP_OCE=800
export TSP_OCEF=60
# Parameter
export hmin=75; # minimum water depth in CROCO, delimiting coastline in WW3
# domains
export AGRIFZ=0
export AGRIF_2WAY="FALSE"
# forcing files
export ini_ext='ini_SODA' # ini extension file (ini_SODA,...)
export bdy_ext='bry_SODA' # bry extension file (bry_SODA,...)
export surfrc_flag="FALSE" # Flag if surface forcing is needed (FALSE if cpl)
export interponline=0 # switch (1=on, 0=off) for online surface interpolation
export frc_ext='blk_CFSR' # surface forcing extension (blk_CFSR, frc_CFSR,...). If interponline=1, just specify the type (ECMWF, CFSR, AROME,...)
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#-------------------------------------------------------------------------------
# ATM
#-------------------------------------------------------------------------------
# namelist
# Time steps
export DT_ATM=150
# Grid size
#[ Grid size should already be set in the namelist. When coupled, it is directly read in cpl_nam.sh ]
# domains
export NB_dom=1 # Number of coupled domains
export wrfcpldom='d01'
# Boundaries interval
export interval_seconds=21600 # interval (in sec) of the lateral input
export auxinput4_interval=360 # interval (in min) of the bottom input
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Now that you have completed the necessary files, you are ready to run your simulation. To do so, simply do:
./submitjob.sh
CROCO-WW3
In your ${CHOME} directory you should have already filled in myenv_mypath.sh.
To make the run, you need to modify the files myjob.sh and mynamelist.sh.
• In myjob.sh, you will have to fill in information about the jobs:
#-------------------------------------------------------------------------------
# Your run can be divided into several jobs (e.g.: a 1-year run into 12 jobs of 1 month)
Along with the number of cpu you will use for each model:
There are other more advanced options, but we will not focus on them for now.
• In mynamelist.sh, specify the name of the experiment, the run type (frc, oa, ow, owa), and which models are used:
#
export CEXPER=BENGUELA_ow
export RUNtype=ow
#
export USE_OCE=1
export USE_WAV=1
Set the exe path (for croco it is usually CROCO_IN, corresponding to OCE_NAM_DIR in myenv_mypath.sh):
#-------------------------------------------------------------------------------
# Exe paths
# ------------------------------------------------------------------------------
export OCE_EXE_DIR="${CHOME}/CROCO_IN"
export WAV_EXE_DIR="${WAV}/exe_ow_BENGUELA"
#-------------------------------------------------------------------------------
# CPL
#-------------------------------------------------------------------------------
# namelist
export namcouplename=namcouple.base.${RUNtype}${istoy}
# coupling frequency
export CPL_FREQ=21600
#-------------------------------------------------------------------------------
# OCE
#-------------------------------------------------------------------------------
# namelist [Info: grid size is directly read in oce_compile.sh and cpl_nam.sh ]
# Online Compilation
export ONLINE_COMP=1
# Time steps
export TSP_OCE=800
export TSP_OCEF=60
# Parameter
export hmin=75; # minimum water depth in CROCO, delimiting coastline in WW3
# domains
# forcing files
export ini_ext='ini_SODA' # ini extension file (ini_SODA,...)
export bdy_ext='bry_SODA' # bry extension file (bry_SODA,...)
export surfrc_flag="TRUE" # Flag if surface forcing is needed (FALSE if cpl)
export interponline=0 # switch (1=on, 0=off) for online surface interpolation
export frc_ext='blk_CFSR' # surface forcing extension (blk_CFSR, frc_CFSR,...). If interponline=1, just specify the type (ECMWF, CFSR, AROME,...)
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#-------------------------------------------------------------------------------
# WAV
#-------------------------------------------------------------------------------
# namelist
# Time steps
export DT_WAV=3600 # TMAX = 3*TCFL
export DT_WW_PRO=1200 # TCFL --> ww3.grid to see the definition
export DT_WW_REF=1800 # TMAX / 2
export DT_WW_SRC=10
# Grid size
export wavnx=41 ; export wavny=42
# forcing files
export forcin=() # forcing file(s) list (leave empty if none)
export forcww3=() # name of ww3_prnc.inp extension/input file
# output settings
export flagout="TRUE" # Keep (TRUE) or not (FALSE) ww3 full output binary file
˓→(out_grd.ww3)
Now that you have completed the necessary files, you are ready to run your simulation. To do so, simply do:
./submitjob.sh
CROCO-WRF-WW3
In your ${CHOME} directory you should have already filled in myenv_mypath.sh.
To make the run, you need to modify the files myjob.sh and mynamelist.sh.
• In myjob.sh, you will have to fill in information about the jobs:
#-------------------------------------------------------------------------------
# Your run can be divided into several jobs (e.g.: a 1-year run into 12 jobs of 1 month)
Along with the number of cpu you will use for each model:
There are other more advanced options, but we will not focus on them for now.
• In mynamelist.sh, specify the name of the experiment, the run type (frc, oa, ow, owa), and which models are used:
#
export CEXPER=BENGUELA_owa
export RUNtype=owa
#
export USE_ATM=1
export USE_OCE=1
export USE_WAV=1
Set the exe path (for croco it is usually CROCO_IN, corresponding to OCE_NAM_DIR in myenv_mypath.sh):
#-------------------------------------------------------------------------------
# Exe paths
# ------------------------------------------------------------------------------
export OCE_EXE_DIR="${CHOME}/CROCO_IN"
export WAV_EXE_DIR="${WAV}/exe_owa_BENGUELA"
export ATM_EXE_DIR="${ATM}/exe_coupled"
Then edit the model settings. If the WW3 grid was created using make_ww3_grd_input_files_from_croco_grd.m, wavnx (respectively wavny) is exactly the value of xi_rho (respectively eta_rho) in the croco_grd.nc file used:
#-------------------------------------------------------------------------------
# CPL
#-------------------------------------------------------------------------------
# namelist
export namcouplename=namcouple.base.${RUNtype}${istoy}
# coupling frequency
export CPL_FREQ=21600
#-------------------------------------------------------------------------------
# OCE
#-------------------------------------------------------------------------------
# namelist [Info: grid size is directly read in oce_compile.sh and cpl_nam.sh ]
# Online Compilation
export ONLINE_COMP=1
# Time steps
export TSP_OCE=800
export TSP_OCEF=60
# Parameter
export hmin=75; # minimum water depth in CROCO, delimiting coastline in WW3
# domains
export AGRIFZ=0
export AGRIF_2WAY="FALSE"
# forcing files
export ini_ext='ini_SODA' # ini extension file (ini_SODA,...)
export bdy_ext='bry_SODA' # bry extension file (bry_SODA,...)
export surfrc_flag="FALSE" # Flag if surface forcing is needed (FALSE if cpl)
export interponline=0 # switch (1=on, 0=off) for online surface interpolation
export frc_ext='blk_CFSR' # surface forcing extension (blk_CFSR, frc_CFSR,...). If interponline=1, just specify the type (ECMWF, CFSR, AROME,...)
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#-------------------------------------------------------------------------------
# WAV
#-------------------------------------------------------------------------------
# namelist
# Time steps
export DT_WAV=3600 # TMAX = 3*TCFL
export DT_WW_PRO=1200 # TCFL --> ww3.grid to see the definition
export DT_WW_REF=1800 # TMAX / 2
export DT_WW_SRC=10
# forcing files
export forcin=() # forcing file(s) list (leave empty if none)
export forcww3=() # name of ww3_prnc.inp extension/input file
# output settings
export flagout="TRUE" # Keep (TRUE) or not (FALSE) ww3 full output binary file
˓→(out_grd.ww3)
#-------------------------------------------------------------------------------
# ATM
#-------------------------------------------------------------------------------
# namelist
export atmnamelist=namelist.input.base.complete
# Time steps
export DT_ATM=150
# Grid size
#[ Grid size should already be set in the namelist. When coupled, it is directly read in cpl_nam.sh ]
# domains
export NB_dom=1 # Number of coupled domains
export wrfcpldom='d01'
# Boundaries interval
export interval_seconds=21600 # interval (in sec) of the lateral input
export auxinput4_interval=360 # interval (in min) of the bottom input
# output settings
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: when XIOS is activated, the following values (for the model) are not taken into account!
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Now that you have completed the necessary files, you are ready to run your simulation. To do so, simply do:
./submitjob.sh
The second is where your job runs and where outputs/restarts are stored:
• ${CEXPER}_execute
• ${CEXPER}_output
• ${CEXPER}_restart
In those directories, you will find one folder per job; if the simulation is divided into 12 jobs, you will have 12 folders.
Note: If one of your models "blows up", reduce the time step (TSP_*) in mynamelist.sh.
TWENTYONE
The aim of this tutorial is to investigate gradually the capability of CROCO to deal with nearshore dynamics. It is built on test cases that are packaged within the CROCO release and will be thoroughly analysed. The various aspects that will be addressed are the following:
• Compute a test-case,
• Modify a test-case (including a new bathymetry, modifying the forcings, . . . ),
• Use of the CROCO embedded WKB wave model,
• Parametrisation of the Bottom Boundary Layer combining wave and circulation,
• Account for the sediment compartment,
• Morphodynamics.
The tutorial is based on the rip_current, sandbar, shoreface, swash test cases. For a description of the wave-
averaged equations and WKB wave model see wci.
Rip currents are strong, seaward flows forced by longshore variation of the wave-induced momentum flux. They
are responsible for the recirculation of water accumulated at a beach by a weaker and broader shoreward flow due
to Stokes drift.
Here, we consider longshore variation of the wave-induced momentum flux due to breaking at barred bottom
topography with an imposed longshore perturbation, as in [YU2003] or [WEIR2011]. The basin is rectangular
(768 m by 768 m) and the topography is constant over time and based on field surveys at Duck, North Carolina.
Shore-normal, monochromatic waves (1 m, 10 s) are imposed at the offshore boundary and propagate through the
WKB wave model coupled with the 3D circulation model (Uchiyama et al., 2011). The domain is periodic in the
alongshore direction. We assume that the nearshore boundary is reflectionless, and there is no net outflow at the
offshore boundary.
The tutorial starts by implementing and running the rip_current test case. It can be activated with the cpp key RIP, which can be followed throughout the source code to gather the main information about the setup. The following figure, taken from [YU2003], shows what the bathymetry looks like.
Answer the following basic questions in order to characterize the setup:
• what is the analytical formulation of the topography, the basin size, the resolution?
• characterize the wave forcing
• what are the interactions between waves and currents?
• what is the formulation of the drag coefficient?
Related CPP options:
CPP options:
# define RIP
# undef OPENMP
# undef MPI
# define SOLVE3D
# define NEW_S_COORD
# define UV_ADV
# define BSTRESS_FAST
# undef NBQ
# ifdef NBQ
# define NBQ_PRECISE
# define WAVE_MAKER
# define WAVE_MAKER_SPECTRUM
# define WAVE_MAKER_DSPREAD
# define UV_HADV_WENO5
# define UV_VADV_WENO5
# define W_HADV_WENO5
# define W_VADV_WENO5
# define GLS_MIXING_3D
# undef ANA_TIDES
# undef MRL_WCI
# define OBC_SPECIFIED_WEST
TWENTYTWO
Warning: This part is only given as an example of a coastal configuration (resolution ~500 m) in a macrotidal environment, so that you have an example of which set of cpp keys to use.
All inputs files (both source code, inputs text files and forcing netcdf files) can be found there :
ftp://ftp.ifremer.fr/ifremer/croco/BDS_EXAMPLE/BDS_CONFIG.tar.gz
Here we describe a coastal configuration in the Bay of Seine (dx,dy = 500 m) with realistic forcing fields:
• Grid : 481*181 and 20 vertical levels, with bathymetry from SHOM (HOMONIM MNT)
Note: In shallow-water conditions with Wetting and Drying meshes, the vertical sigma coordinates should be equally spaced (no refinement near the bottom, to avoid overly thin layers). To do so, set the critical depth hc in croco.in to a large value (say 1e16).
• Schematic river canal for the Seine river: the resolution of the meshes is locally refined in this canal to take into account the section reduction, based on the 500 m grid size
• Tidal forcing from PREVIMER atlas (700m resolution with both elevation and velocity)
– 16 waves : M2 N2 S2 K2 2N2 O1 K1 Q1 P1 M4 MS4 MN4 MK4 M6
– Harmonic composition with MAS (Simon-SHOM) method for elevation (no need to compute wave
arguments in pre-processing)
• Realistic 3D open boundary conditions (T,S) from a 2.5km grid-mesh regional model
• Online atmospheric forcing from METEO-FRANCE
– Bulk flux with Fairall formulation
– Atm pressure gradient taken into account in the equations
– Inverse barometer effect from atm pressure : correction added to ssh from harmonic composition at
OBC
• River inputs: realistic time series for outflow, temperature and salinity of 9 sources
• Physics
– Wetting and drying scheme
– High order advection scheme WENO5 on both horizontal and vertical dimensions for momentum and
tracers
– Barotropic and baroclinic coupling with the M2_FILTER_NONE option and with myalpha=0.3
– Vertical turbulent fluxes from GLS (k-epsilon)
TWENTYTHREE
XIOS
As a starting point for this tutorial, we will use the BASIN test case (see section 5.1).
cd ~/CONFIGS/BASIN
Is everything ok? Does it compile? Does it run? Are the two files basin_rst.nc and basin_his.nc created?
What is the walltime (or real time)?
Now add the XIOS functionality to the croco executable.
If XIOS is installed on your target machine (it is the case on Datarmor), there are only two
simple new steps to follow:
1. Edit cppdefs.h:
You need to define two new cpp keys for this test case:
/*
! Basin Example
! ===== =======
*/
# define XIOS
# undef OPENMP
.....
#
# set XIOS directory if needed
#
XIOS_ROOT_DIR=$HOME/xios-2.5
#
For this tutorial, copy send_xios_diags.F locally and comment three lines (217, 218 and 219) in it:
cp ~/croco/croco/XIOS/send_xios_diags.F .
! call xios_send_field("uwnd",uwnd)
! call xios_send_field("vwnd",vwnd)
! call xios_send_field("bvf",bvf)
Then compile:
./jobcomp
Before running the model with the XIOS module, we need three xml files (field_def.xml, domain_def.xml and
iodef.xml):
cp /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BASIN_WITH_XIOS/field_def.xml .
cp /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BASIN_WITH_XIOS/domain_def.xml .
cp /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BASIN_WITH_XIOS/iodef.xml.OneFile iodef.xml
In iodef.xml, the boolean false means that croco will run with XIOS in “attached mode”: each computing
processor writes to the output file itself. In this “attached mode”, XIOS behaves like a netcdf4 layer.
In this iodef.xml file, the configuration for outputs is the same as in the croco.in file.
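The boolean in question is typically the using_server variable; a minimal hedged sketch of the relevant line in iodef.xml (switching it to true enables the detached mode used later):

<variable id="using_server" type="bool">false</variable>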
Run the model in “attached mode”:
qsub job_croco_mpi.pbs
Note: If your output file starts with a ?, it is due to a tab before the configuration title in croco.in
(Basin Example). Just replace the tab with a blank space.
-> For large configurations, XIOS is very efficient at parallel netCDF writing.
Edit the iodef.xml file and add new 2D and 3D fields to be written in the output file by uncommenting
lines:
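The commented lines are field entries of the output file definition; a hedged sketch of what such entries look like (the field names are illustrative and assumed to be declared in field_def.xml):

<field field_ref="u" />
<field field_ref="w" />

Then re-run: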
qsub job_croco_mpi.pbs
cp /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BASIN_WITH_XIOS/iodef.xml.Twofiles iodef.xml
Have a look at the iodef.xml file to understand how to simply add a new output file.
Run the model:
qsub job_croco_mpi.pbs
The boolean true means that croco will run with XIOS in “detached mode”: each computing
processor will send fields to one or several XIOS servers, which will be in charge of writing the
output files.
Edit job_croco_mpi.pbs to add one XIOS server:
##PBS -l select=1:ncpus=28:mpiprocs=4:mem=8g
#PBS -l select=1:ncpus=28:mpiprocs=5:mem=8g
There will be 4 computing processors sending fields to 1 XIOS server writing to the output files.
Run the model:
qsub job_croco_mpi.pbs
Theoretically, the computing processors will run faster (keep in mind that reading and writing files is
slow; computing is fast!).
What is the walltime (or real time)?
Is it worth using detached mode in this case?
Adding an online diagnostic using ONLY XIOS:
In the output file, we want a new variable computed from already-defined variables. For
instance, we want to have zeta*zeta . . .
Edit field_def.xml and add the new variable zeta2:
Then edit iodef.xml and add the new variable to be written in the output file:
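A possible sketch of these two edits, assuming zeta is already declared in field_def.xml and using XIOS's inline field arithmetic (attribute values are illustrative):

In field_def.xml:
<field id="zeta2" long_name="squared sea surface elevation" unit="m2" field_ref="zeta"> zeta * zeta </field>

In iodef.xml:
<field field_ref="zeta2" />

Then run again: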
qsub job_croco_mpi.pbs
cd $confs/Run_BENGUELA_LR
cp /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BENGUELA_LR_XIOS/* .
Compile once:
./jobcomp
Run:
qsub job_croco_mpi.pbs
TWENTYFOUR
TIPS
1. In case of strange errors during compilation (e.g. “catastrophic error: could not find . . . ”), try one of these
solutions:
• check your home space is not full ;-)
• check your paths to compilers and libraries (especially Netcdf library)
• check that you have the right permissions, and that your executable files (configure, make. . . )
really are executable
• check that your shell script headers are correct, or add them if necessary (e.g. for bash:
#!/bin/bash)
• try to exit/log out of the machine, log back in, clean, and restart the compilation
2. Errors and tips related to netcdf library
• with netcdf 4.3.3.1: need to add the following compilation flag for all models: -mt_mpi
The error associated with a missing -mt_mpi flag is of this type:
“/opt/intel//impi/4.1.1.036/intel64/lib/libmpi_mt.so.4: could not read symbols: Bad value”
• with netcdf 4.1.3: do NOT add -mt_mpi flag
• with netcdf4, you need to place the hdf5 library path in your environment:
export LD_LIBRARY_PATH=YOUR_HDF5_DIR/lib:$LD_LIBRARY_PATH
• with netcdf 4, if you use the library split in two parts (C and Fortran), you need to place links
to the C library before links to the Fortran library, and to put both paths in this same order in your
LD_LIBRARY_PATH (see the sketch below)
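A hedged sketch of that ordering (the install prefixes are placeholders for your system):

# netCDF-C path first, netCDF-Fortran second, as described above
export NETCDF_C_DIR=/path/to/netcdf-c        # placeholder
export NETCDF_F_DIR=/path/to/netcdf-fortran  # placeholder
export LD_LIBRARY_PATH=$NETCDF_C_DIR/lib:$NETCDF_F_DIR/lib:$LD_LIBRARY_PATH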
3. In case of a ‘segmentation fault’ error
• try to allocate more memory with “ulimit -s unlimited”
• try to launch the compilation as a job (batch) with more allocated memory
4. relocation truncated to fit: R_X86_64_32S against symbol at compilation in sequential mode
• add the option -mcmodel=large to the compile options (FLAGS) in jobcomp
5. m2c error at the beginning of the compilation
• the path to OCEAN directory in the jobcomp file may be wrong
• you may be asking for too many cpus in MPI compared to the size of your grid
• XIOS
Warning: The output time step in XIOS must be a multiple of the 3D time step in croco
Some advice and tips on how to use the model with analytical forcings:
1. How to unplug atmospheric forcing
In cppdefs.h you should undefine BULK_FLUX and add some keys:
#undef BULK_FLUX
#define ANA_SSFLUX
#define ANA_STFLUX
#define ANA_SMFLUX
#define ANA_SRFLUX
• Define ANA_BRY
• Edit the analytical.F routine and set your own OBC for zeta, Ubar, Vbar, U, V, T, S
TWENTYFIVE
Note: The MUSTANG learning curve is a steep one. Understanding the documentation
strongly benefits from reading the code itself.
Note: In this tutorial, $croco refers to the main directory of the CROCO source code.
cp -r $croco/MUSTANG/MUSTANG_NAMELIST/ ./MUSTANG_NAMELIST
cp -r $croco/TEST_CASES/ ./TEST_CASES
cp $croco/OCEAN/cppdefs.h .
cp $croco/OCEAN/param.h .
cp $croco/OCEAN/Makefile .
cp $croco/OCEAN/jobcomp .
• Modify your jobcomp to point to the location of your CROCO source code
• Edit the cppdefs.h file, e.g.:
# define DUNE
# define MUSTANG
Make sure MUSTANG is activated. For some test cases SEDIMENT (USGS sediment
model) is activated by default in cppdefs.h.
1. First choice: V1 or V2?
If you need bedload, you don't have the choice:
# define key_MUSTANG_V2
Warning: If you want to add a source of sand (e.g. rivers) with the pseudo-2D scheme, be aware that this has not been
tested yet. Most probably your discharge will only be a fraction of what you wanted. You will need
either to adjust the concentration or to modify step3D_t.F in the following section, to sum up the water-
column fluxes in the bottom layer:
!----------------------------------------------------------
! Apply point sources for river runoff simulations
!----------------------------------------------------------
# define WAVE_OFFLINE
Activates the reading of wave data (this is an existing CROCO CPP option). If com-
bined with #define MUSTANG, it reads the significant wave height, wave period, wave direc-
tion and bottom orbital velocity. The wave-induced bottom shear stress is then computed in
sed_MUSTANG_CROCO.F90. Note that the significant wave height (or wave amplitude) still has to
be provided for now, but it is not used to compute the bed shear stress.
Header of an example wave file:
dimensions:
wwv_time = UNLIMITED ; // (2586 currently)
eta_rho = 623 ;
xi_rho = 821 ;
variables:
double wwv_time(wwv_time) ;
double hs(wwv_time, eta_rho, xi_rho) ;
hs:_FillValue = -32767. ;
double t01(wwv_time, eta_rho, xi_rho) ;
t01:_FillValue = -32767. ;
double dir(wwv_time, eta_rho, xi_rho) ;
dir:_FillValue = -32767. ;
double ubr(wwv_time, eta_rho, xi_rho) ;
ubr:_FillValue = -32767. ;
# define PSOURCE_NCFILE
# define PSOURCE_NCFILE_TS
Header of an example river runoff file:
dimensions:
qbar_time = 7676 ;
n_qbar = 6 ;
runoffname_StrLen = 30 ;
temp_src_time = 8037 ;
salt_src_time = 8037 ;
MUD1_src_time = 7676 ;
variables:
double qbar_time(qbar_time) ;
qbar_time:long_name = "runoff time" ;
qbar_time:units = "days" ;
qbar_time:cycle_length = 0 ;
qbar_time:long_units = "days since 1900-01-01" ;
double Qbar(n_qbar, qbar_time) ;
Qbar:long_name = "runoff discharge" ;
Qbar:units = "m3.s-1" ;
char runoff_name(n_qbar, runoffname_StrLen) ;
double temp_src_time(temp_src_time) ;
temp_src_time:cycle_length = 0 ;
temp_src_time:long_units = "days since 1900-01-01" ;
double salt_src_time(salt_src_time) ;
salt_src_time:cycle_length = 0 ;
The fraction of each sediment variable in the seafloor is then defined with
cini_sed_n() in parasubstance_MUSTANG.txt.
• Read the sediment cover from a netcdf file or restart from a RESTART file
In paraMUSTANG*.txt:
l_repsed=.true.   ! boolean set to .true. if sedimentary variables are initialized from a previous run
The netcdf file needs to have the concentration values under the names NAME_sed,
with NAME corresponding to the names defined in the SUBSTANCE input files.
The number of vertical levels (ksmi, ksma) and the layer thickness (DZS) also need
to be defined. The file structure is similar to the RESTART netcdf file, and filerepsed
can be used to restart from a CROCO RESTART file.
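A minimal sketch of the corresponding lines in paraMUSTANG*.txt, reusing the variable names quoted above (the path is illustrative):

l_repsed=.true.                        ! initialize sediment from a previous run
filerepsed='CROCO_FILES/croco_rst.nc'  ! illustrative: a RESTART or sediment cover file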
Header of an example sediment cover file:
dimensions:
ni = 821 ;
nj = 623 ;
time = UNLIMITED ; // (1 currently)
level = 10 ;
variables:
double latitude(nj, ni) ;
double longitude(nj, ni) ;
Alternatively, a third option is possible: if l_repsed=.false. and l_unised=.false., you can
specify the filename of your sediment cover dataset (fileinised), but then it is up to you to write
the piece of code that reads it, in initMUSTANG.F90 in the subroutine MUSTANG_sedinit.
How to prescribe the concentration for the initialisation:
• Uniform sediment cover. If you use a uniform sediment cover, the initial fraction
of each sediment class is read in parasubstance_MUSTANG.txt. Then the concentra-
tion of each sediment class is a fraction of cseduni defined in paraMUSTANG.txt (i.e.
cv_sed(iv)=cini_sed_n(iv) x cseduni). However, since you prescribe cseduni, it is not neces-
sarily consistent with what the model total concentration should be for the same sediment mixture,
unless you used the same porosity model as in MUSTANG to compute cseduni. A small numerical illustration follows.
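As a purely illustrative numerical example (the values are not from the tutorial): with cseduni = 1500 kg/m3 and two classes with cini_sed_n = 0.3 and 0.7, the initialisation gives cv_sed(1) = 0.3 x 1500 = 450 kg/m3 and cv_sed(2) = 0.7 x 1500 = 1050 kg/m3; nothing guarantees that the total of 1500 kg/m3 matches the porosity law for that particular mixture.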
With MUSTANG V2, after initialisation, the sediment concentration is adjusted
in every layer to match the model porosity law. Hence the initial mass is not
preserved, but the bed height and the sediment class fractions are.
With MUSTANG V1, by default the sediment concentration is not adjusted. In this
case, the first time erosion happens, the very first deposit
could have a very different porosity from the initial state and induce an abrupt bed-
height change. You can set l_init_hsed=.true. to bypass this issue. While adjust-
ing the sediment concentration, it will also adjust the sediment height to conserve
the initial mass.
• RESTART. If you use l_repsed=.true., l_init_hsed is not even read. In V1, the sediment
concentrations that you specify will not be overwritten. It means that you have to start
with concentrations that follow the porosity law of the model. In V2, concentrations are
overwritten in all layers after computing the porosity for the sediment mixture. In this case
you can specify concentrations that are just a fraction of an arbitrary constant total sediment
concentration.
TWENTYSIX
Warning: This is specific to the DATARMOR cluster used for this training; if you are working on your own
computer, follow the System Requirements and Downloading the code tutorials to download the code and
set up your environment.
An environment script has been created for this training on DATARMOR. It loads the necessary modules and
sets some useful paths and environment variables. Copy this croco_env.csh script and source it. If you already have
a .cshrc, .tcshrc or .bashrc environment script, please copy it to .cshrc.bck to avoid over-definitions, and use only
croco_env.csh during the training period.
cd $HOME
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2019/croco_env.* .
source croco_env.csh
Now the $CROCO_DIR environment variable is defined and you will find useful material for this training in this
directory.
cd $work
mkdir TRAINING_2019
cd TRAINING_2019
mkdir croco
mkdir CONFIGS
cp -r $CROCO_DIR/SOURCE_CODES/CROCO/croco_git/croco croco/.
cp -r $CROCO_DIR/SOURCE_CODES/CROCO/croco_git/croco_tools croco/.
If you have followed this architecture, the following environment variables have also been set to facilitate
navigation:
• $croco points to your croco sources: $work/TRAINING_2019/croco/croco
• $tools points to your croco tools: $work/TRAINING_2019/croco/croco_tools
• $confs points to your configurations directory: $work/TRAINING_2019/CONFIGS
Investigate the various directories on your own.
Warning: do not modify any of the files contained in your source directories $croco and $tools, to keep
your source files clean; modifications should be performed in your configuration directories (as we will see
later).
Datasets for preparing surface and boundary conditions from climatological datasets can be downloaded from the
CROCO website. For this training you will find them in $CROCO_DIR/DATA/DATASETS_CROCOTOOLS;
otherwise see the Download tutorial.
You can also find the following global atmospheric reanalysis in $CROCO_DIR/DATA/METEOROLOGICAL_FORCINGS/:
• ERAI
• CFSR
And the following ocean reanalysis in $CROCO_DIR/DATA/3D_OCEAN_FORCING:
• SODA
• ECCO2
cp -R /home/datawork-croco/datarmor-only/CONFIGS/TUTO20/BASIN_NO_XIOS/* $confs/BASIN
cd $confs/BASIN
For the DATARMOR training, OASIS has already been compiled, so you can just copy the sources and compiled files:
mkdir -p $work/TRAINING_2019/oasis
cp -r $CROCO_DIR/SOURCE_CODES/OASIS/OASIS3-MCT_3.0_branch_compiled $work/TRAINING_2019/oasis/OASIS3-MCT_3.0_branch
The configure file for compiling OASIS on DATARMOR, named make.datarmor, can be found here:
$CROCO_DIR/make.datarmor
For the DATARMOR training, WRF has been compiled, and you can just copy the source and compiled files:
mkdir -p $work/TRAINING_2019/wrf
cp -r $CROCO_DIR/SOURCE_CODES/WRF/WRFV3.7.1_compiled $work/TRAINING_2019/wrf/WRFV3.7.1
cp -r $CROCO_DIR/SOURCE_CODES/WRF/WPSV3.7.1 $work/TRAINING_2019/wrf/.
mkdir -p $work/TRAINING_2019/ww3
cp -r $CROCO_DIR/SOURCE_CODES/WW3/github/WW3_compiled/* $work/TRAINING_2019/ww3/.
cp $CROCO_DIR/SOURCE_CODES/TOY/toy_compiled/toy_model $confs/Run_BENGUELA_LR_cpl/.
cp $CROCO_DIR/DATA/BENGUELA_CPL/toy_files/* $confs/Run_BENGUELA_LR_cpl/.
You should now have the following new files in your configuration directory:
• toy_model
• grid_wav.nc
• TOYNAMELIST.nam
• toy_wav.nc
A filled-in example of the namcouple file is also provided in $CROCO_DIR/DATA/BENGUELA_CPL/oasis_files.
TWENTYSEVEN
IFREMER SPECIFIC
This tutorial is written in the framework of the supercomputer (DATARMOR) located at Ifremer. It is also a guide
for those who work with the MARS3D model and who want to build their configurations with CROCO.
Warning: This is specific to the DATARMOR cluster used for this training; if you are working on your own
computer, follow the System Requirements and Downloading the code tutorials to download the code and
set up your environment.
An environment script has been created for this training on DATARMOR. It loads the necessary modules and
sets some useful paths and environment variables. Copy this croco_env.csh script and source it. If you already have
a .cshrc, .tcshrc or .bashrc environment script, please copy it to .cshrc.bck to avoid over-definitions, and use only
croco_env.csh during the training period.
cd $HOME
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/croco_env.* .
source croco_env.csh
Now the $CROCO_DIR environment variable is defined and you will find useful material for this training in this
directory.
cd $work
mkdir TRAINING_2021
cd TRAINING_2021
mkdir croco
mkdir CONFIGS
cp -r $CROCO_DIR/../../SOURCE_CODES/CROCO/croco_git/croco_master/croco croco/.
cp -r $CROCO_DIR/../../SOURCE_CODES/CROCO/croco_git/croco_tools croco/.
If you have followed this architecture, the following environment variables have also been set to facilitate
navigation:
• $croco points to your croco sources: $work/TRAINING_2021/croco/croco
Warning: do not modify any of the files contained in your source directories $croco and $tools, to keep
your source files clean; modifications should be performed in your configuration directories (as we will see
later).
BASIN
The VILAINE case is an example of a realistic coastal configuration taking into account:
• Tidal circulation
• Wet/dry areas
• River outflows
• Sediment dynamics with MUSTANG
The configuration is included in CROCO as a reference coastal case (see cppdefs.h).
1. Set the environment
source ~/croco_env.sh
mkdir $confs/VILAINE
cd $confs/VILAINE
cp $croco/OCEAN/cppdefs.h .
cp $croco/OCEAN/param.h .
cp $croco/OCEAN/jobcomp .
Edit cppdefs.h to select the coastal configuration:
# define COASTAL
# undef REGIONAL
You can also explore the CPP options selected for the VILAINE case:
• which physical parametrizations?
• which advection schemes?
You can check the VILAINE settings in param.h:
• Dimension of the grid ?
• Number of vertical levels ?
5. Edit the compilation script jobcomp:
see BASIN
6. Get the input files for the run
cp /home/datawork-croco/public/ftp/CONFIGS_EXAMPLES/VILAINE/croco.in .
cp -r /home/datawork-croco/public/ftp/CONFIGS_EXAMPLES/VILAINE/CROCO_FILES .
Take a look at the input files in CROCO_FILES and check that the croco.in file is filled out correctly.
7. Get the namelists for the MUSTANG module
cp -r /home/datawork-croco/public/ftp/CONFIGS_EXAMPLES/VILAINE/MUSTANG_NAMELIST .
# define MPI
cp $CROCO_DIR/batch_comp_datarmor .
qsub batch_comp_datarmor
cp $CROCO_DIR/job_croco_mpi.pbs .
qsub job_croco_mpi.pbs
cp $croco/OCEAN/scalars.h .
spval=999.
define FILLVAL
• add this key in cppdefs.h so that bathymetry is not added on wet/dry cells:
define ZETA_DRY_IO
mkdir BMG
cp /home/datawork-mars/TOOLS/BATHY/BMGTOOLS/bmg-linux64b-rev1489.tar.gz .
tar -xzvf bmg-linux64b-rev1489.tar.gz
cd create_bmg-5.0.0
./CreateBMG.sh
/home/datawork-croco/datarmor-only/DATA/COASTLINE/BMGTOOLS_FORMAT/france.line
/home/datawork-croco/datarmor-only/DATA/COASTLINE/BMGTOOLS_FORMAT/europa.closed.line
/home/datawork-croco/datarmor-only/DATA/COASTLINE/BMGTOOLS_FORMAT/med_sea.line
/home/datawork-mars/TOOLS/BATHY/INTERP/interp_bathy/INTERP_BATHY.exe
/home/datawork-mars/TOOLS/BATHY/INTERP/interp_bathy/namelist
/home/datawork-mars/TOOLS/BATHY/INTERP/interp_bathy/batch_interp
• Build a text file which lists the MNT files you want to use, picked from here:
cat catalog.dat:
/home/datawork-croco/datarmor-only/DATA/MNT_HOMONIM/MNT_ATL100m_HOMONIM_WGS84_NM.nc
&interp_soundings2grid
coastfile='/home/datawork-croco/datarmor-only/DATA/TDC/france.line'
&interp_grid2grid
data_catalog='catalog.cat'
l_bathy_bmg=.true.
l_closed_line=.true.
landvalue=-999
grid_file='RootGrid.nc'
nivmoypath=''
qsub batch_interp
8. In another terminal, open the Check BMG module to view and edit (if needed) your bathymetry
• Open
cd check_bmg-5.0.0
./CheckBMG.sh
Warning: don't forget to save your project so that your modifications are taken into account
cp -r /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/MARS2CROCO/BATHY .
cd BATHY
ncview croco_grd.nc
To get tides at your OBC, you need an atlas of harmonic constituents on your model grid
croco_grd.nc.
• First you also need the following scripts:
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/MARS2CROCO/TIDES/convert_fes2croco.py .
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/MARS2CROCO/TIDES/tides.txt .
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/batch_python .
• Edit the python script to set the list of constituents you want in your atlas
• The script needs the croco_grd.nc file, so link it into the directory:
ln -s ../BATHY/croco_grd.nc .
qsub batch_python
This part deals with the generation of OBC and IC for your grid, from an Ocean General Circulation Model (e.g.
MERCATOR, HYCOM...).
1. First you have to get the numerical solution which covers your grid and your period of simulation
/home/datawork-croco/datarmor-only/DATA/MERCATOR_SOLUTION
2. The second step is to interpolate this file on your grid. We use a Fortran program for this:
• Get the following directory
cp -r /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/EXTRACT_CROCO .
cd EXTRACT
vi IN/namelist
&extractmode
l_extract_obc=.false.   ! compute Open Boundary Conditions (set to true)
• Change the name of the input file to the MERCATOR file:
&namcoarse
file_coarse = 'MERCATOR_PSY2V4.nc'   ! name of file containing data to be interpolated
• In this section, set the names of the init and obc files and the input bathy file, and activate which obc you want
to extract:
&namfine
head_fine = 'head.useless'   ! head.CONF file (a line of the one used in MARS)
file_bathy_fine = 'bathy_rang1_2500_final.nc'
Z=(0:immersion1:dh1_ref,immersion1:immersion2:dh2_ref,immersion2:immersion3:dh3_ref)
&param_interp
l_interpxyz = .false. !
rapdist = 6.0   ! only if l_interpxyz=.true.; must be <= 6
qsub batch_extract
ssh -X login@datarmor
1. Setup environment
• Source this file to set some environment variables
cd $HOME
cp /home/datawork-croco/datarmor-only/TRAININGS/TRAINING_2021/croco_env.* .
source croco_env.csh
cd $work
mkdir TRAINING_2021
cd TRAINING_2021
mkdir croco
mkdir CONFIGS
cp -r $CROCO_DIR/../../SOURCE_CODES/CROCO/croco_git/croco_master/croco croco/.
cp -r $CROCO_DIR/../../SOURCE_CODES/CROCO/croco_git/croco_tools croco/.
cd $confs
mkdir my_config
cd my_config
mkdir CROCO_FILES
cp $croco/OCEAN/param.h .
cp $croco/OCEAN/cppdefs.h .
cp $croco/OCEAN/jobcomp .
cp $CROCO_DIR/job* .
1. Setup param.h
• First section: define your domain dimensions (get xi_rho and eta_rho from croco_grd.nc); see the sketch after the excerpt below
#ifdef MPI
integer NP_XI, NP_ETA, NNODES
parameter (NP_XI=14, NP_ETA=6, NNODES=NP_XI*NP_ETA)
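For reference, a hedged sketch of the corresponding grid-size line in param.h (the values are hypothetical, for a 481x181 croco_grd.nc grid with 20 levels; in CROCO, LLm0 and MMm0 are the interior sizes, i.e. xi_rho-2 and eta_rho-2):

      parameter (LLm0=479, MMm0=179, N=20)   ! hypothetical: xi_rho=481, eta_rho=181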
#ifdef WET_DRY
real D_wetdry ! Critical Depth for Drying cells
# else
parameter (D_wetdry=0.4)
#else
parameter (Ntides=114)
/* Configuration Name */
# define MEDI5KM
# define MPI
• Activate TIDES and define Open boundary conditions according to your domain
/* Vertical Mixing */
# define GLS_MIXING
/* Surface Forcing */
# undef BULK_FLUX
/* Remove atmospheric terms */
# define ANA_SSFLUX /* analytical salinity flux */
– Lateral Forcing
/* Lateral Forcing */
# undef CLIMATOLOGY
# define ANA_INITIAL
# define ANA_BRY
# define FRC_BRY
# ifdef FRC_BRY
# define Z_FRC_BRY
# define M2_FRC_BRY
# undef M3_FRC_BRY
# undef T_FRC_BRY
# endif
– Bottom Forcing
/* Bottom Forcing */
# define ANA_BSFLUX
# define ANA_BTFLUX
– Deactivate sources
/* Point Sources - Rivers */
# undef PSOURCE
# undef PSOURCE_NCFILE
# ifdef PSOURCE_NCFILE
# define PSOURCE_NCFILE_TS
# endif
start_date:
01-01-1900 00:00:00
• Set the path to your inputs files which should be in a CROCO_FILES directory
grid: filename
CROCO_FILES/croco_grd.nc
forcing: filename
CROCO_FILES/croco_frc_manga16.nc
bulk_forcing: filename
CROCO_FILES/bidon.nc
climatology: filename
CROCO_FILES/croco_clm.nc
boundary: filename
CROCO_FILES/croco_bry.nc
initial: NRREC filename
-1
CROCO_FILES/croco_ini.nc
restart: NRST, NRPFRST / filename
9000 -2
CROCO_FILES/croco_rst.nc
history: LDEFHIS, NWRT, NRPFHIS / filename
T 90 0
CROCO_FILES/croco_his.nc
averages: NTSAVG, NAVG, NRPFAVG / filename
1 2140 0
CROCO_FILES/croco_avg.nc
• Choose which variables you want to save in your output (T/F to activate/deactivate)
F F F F F F F F F F F
˓→ F F F F F 10*F
gls_averages: TKE GLS Lscale
F F F
• Set the lateral viscosity (ONLY if the UV_VIS2 or UV_VIS4 cpp keys are enabled)
• Set the lateral diffusivity (ONLY if the UV_DIFF2 or UV_DIFF4 cpp keys are enabled)
#PBS -q mpi_3
#PBS -l mem=8gb
#PBS -l walltime=10:00:00
#PBS -N CROCO_SEINE
qsub batch_mpt
7. Visualize
• First use ncview
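For example, pointing ncview at the history file declared in croco.in (path illustrative):

ncview CROCO_FILES/croco_his.nc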
# define BULK_FLUX
# ifdef BULK_FLUX
# define BULK_FAIRALL
# undef BULK_LW
# undef BULK_EP
# define BULK_SMFLUX
# ifdef BULK_SMFLUX
# define BULK_SM_UPDATE
# endif
# undef SST_SKIN
# undef ANA_DIURNAL_SW
# define ONLINE
# define AROME
# define READ_PATM
# undef ERA_ECMWF
# undef RELATIVE_WIND
# else
# undef QCORRECTION
# undef SFLX_CORR
# undef ANA_DIURNAL_SW
# endif
• First you need to get your IC and OBC files; see 3D Initial and Boundary conditions
• Edit cppdefs.h to activate BRY conditions:
undefine the analytical init and boundary conditions, and activate BRY for tracers (T_FRC_BRY)
# undef ANA_INITIAL
# undef ANA_BRY
# define FRC_BRY
# ifdef FRC_BRY
# define Z_FRC_BRY
# define M2_FRC_BRY
# undef M3_FRC_BRY
# undef T_FRC_BRY
# endif
boundary: filename
CROCO_FILES/croco_bry.nc
initial: NRREC filename
-1
CROCO_FILES/croco_ini.nc
Note: When you start from an init file, the start date of your simulation is the date of the file.
module avail
module load ferret/7.1__64b
Launch it by typing:
ferret
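Once ferret is running, a typical way to open a history file and list its contents (the file name is illustrative) is:

yes? use croco_his.nc   ! open a netCDF file
yes? show data          ! list the variables it contains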
you will get a list of all the variables contained in the loaded file and their dimensions:
• i : designates the x dimension
• j : designates the y dimension
• k : designates the vertical dimension
• l : designates the temporal dimension
4. List the numerical values of a section of a given variable:
From now on, let's consider the variable temp, which has four dimensions (time + x,y,z).
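The command itself is not reproduced in this extract; a hedged ferret sketch consistent with the description below is:

yes? list temp[i=10,j=10,k=40]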
This will list all the numerical data at level k=40, i=10, j=10 of the variable temp.
5. Plot a one-dimensional feature by fixing n-1 of the variable's n dimensions.
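The commands are not reproduced in this extract; hedged ferret sketches consistent with the two descriptions below are:

yes? plot temp[i=10,j=10,l=4]           ! vertical profile at time l=4
yes? plot temp[i=10,j=10,l=10]          ! first profile
yes? plot/overlay temp[i=10,j=10,l=40]  ! superimpose a second instant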
this will plot the vertical profile of the variable temp at time l=4;
the same as the previous, but with superimposition of two profiles at two different instants (l=10 and l=40).
6. Plot a two-dimensional feature by fixing n-2 of the variable's n dimensions.
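The commands are not reproduced in this extract; hedged ferret sketches consistent with the three descriptions below are (the fixed j index is illustrative):

yes? shade temp[k=40,j=10]                          ! longitude vs time
yes? shade/levels=(10,20,1) temp[k=40,j=10]         ! color bar from 10 to 20, bins of 1
yes? shade/levels=(0)(10,20,1)(30) temp[k=40,j=10]  ! extend end classes to 0 and 30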
In case k=40 designates the surface layer, this will plot a Hovmöller diagram along all longitudes vs time.
The same as the previous one, but setting a color bar that extends from 10 to 20 with bins of 1.
The same as the previous one, but extending the first and last color classes respectively down to 0 and up to 30.
go to the URL https://gitlab.inria.fr/croco-ocean/croco
click on the register tab in the upper-right corner
then, on the right side of the page, click the highlighted register button
confirm your registration with the mail you received
go back to https://gitlab.inria.fr
log in
search for the croco project
select the croco project and then ask for access
2. Once you get access, create your own local repository:
mkdir /homeX/datahome/login/croco/
cd croco
git init
git clone [email protected]:croco-ocean/croco.git
git branch -v -a
4. Create your own local branch (tutu) from a given remote branch (toto)
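A typical command for this (branch names taken from the text):

git checkout -b tutu origin/toto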
git status
7. Toto
8. Log
git log
cp /home/datawork-croco/datarmor-only/FORMATION/SRC/croco/XIOS/*.xml .
cp /home/datawork-croco/datarmor-only/FORMATION/SRC/croco/XIOS/*.xml_full .
cp /home/datawork-croco/datarmor-only/FORMATION/SRC/croco/XIOS/send_xios_diags.F .
[YU2003] Yu, J., & Slinn, D. N. (2003). Effects of wave-current interaction on rip currents. Journal of Geophysical Research: Oceans, 108(C3).
[THORNTON1983] Thornton, E. B., & Guza, R. T. (1983). Transformation of wave height distribution. Journal of Geophysical Research, 88, 5925-5938.
[UCHIYAMA2009] Uchiyama, Y., McWilliams, J. C., & Restrepo, J. M. (2009). Wave-current interaction in nearshore shear instability analyzed with a vortex force formalism. Journal of Geophysical Research: Oceans, 114(C6).
[WEIR2011] Weir, B., Uchiyama, Y., Lane, E. M., Restrepo, J. M., & McWilliams, J. C. (2011). A vortex force analysis of the interaction of rip currents and surface gravity waves. Journal of Geophysical Research: Oceans, 116(C5).