
WRF Installation Best Practices

1. Introduction:

The following best practices document is provided as a courtesy of the HPC Advisory Council.

2. Application Description:

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.

3. Version Information:

Download WRF 3.8 at:
http://www2.mmm.ucar.edu/wrf/src/WRFV3.8.TAR.gz
Download the WRF benchmarks at:
http://box.mmm.ucar.edu/wrf/WG2/benchv2

4. Prerequisites:

4.1 Hardware:
The instructions in this best practice have been tested on the HPC Advisory Council's Dell™ PowerEdge™ R730 32-node cluster:
• Dual-socket Intel® Xeon® E5-2697 V3 14-core CPUs @ 2.60 GHz
• Mellanox ConnectX-4 EDR 100Gb/s InfiniBand adapters
• Mellanox Switch-IB SB7700 36-port 100Gb/s EDR InfiniBand switches

4.2 Software:
a. OS: Red Hat Enterprise Linux 6.5 or later
b. Compilers: Intel compilers 2016
c. MPI: hpcx-v1.5.370-icc
d. Other:
• hdf5-1.8.16
• netcdf-4.4.0
• netcdf-fortran-4.4.3
• parallel-netcdf-1.7.0
• benchmark workload

5. Installation

5.0 Building OpenMPI using Intel Compiler 2016

---
source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64

export CC=icc
export CXX=icpc
export FC=ifort
export F90=ifort

cd openmpi-1.10.2
make clean
./configure --prefix=${HPCX_HOME}/ompi-v1.10-i16 --with-knem=${HPCX_HOME}/knem \
    --with-fca=${HPCX_HOME}/fca --with-mxm=${HPCX_HOME}/mxm \
    --with-hcoll=${HPCX_HOME}/hcoll \
    --with-platform=contrib/platform/mellanox/optimized

make -j 16 all
make -j 16 install
---

# Create a module file for openmpi
% cd /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
% cp hpcx-ompi-v1.10 hpcx-ompi-v1.10-i16
% sed "s/ompi-v1.10/ompi-v1.10-i16/g" -i hpcx-ompi-v1.10-i16

5.1 Building hdf5

Download hdf5 from http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.16.tar.gz.
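The configure step below is run from inside the unpacked hdf5 source tree. A minimal sketch of fetching and unpacking the source first (the /application/tools/i16 staging directory is only an assumed working location, chosen to match the install prefixes used in this document):

---
cd /application/tools/i16
wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.16.tar.gz
tar xzf hdf5-1.8.16.tar.gz
cd hdf5-1.8.16
---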

---
source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export CC=mpicc
export CXX=mpic++
export FC=mpif90
export F90=mpif90

./configure --prefix=/application/tools/i16/hdf5-1.8.16/install-hpcx --enable-parallel --enable-shared
make -j 28 install

5.2 Building parallel-netcdf

Download parallel-netcdf from http://cucis.ece.northwestern.edu/projects/PnetCDF/download.html.

---
source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif90
export F90=mpif90

export OMPI_MPICC=icc
export OMPI_MPICXX=icpc
export OMPI_MPIFC=ifort
export OMPI_MPIF77=ifort
export OMPI_MPIF90=ifort

export CFLAGS='-g -O2 -fPIC'
export CXXFLAGS='-g -O2 -fPIC'
export FFLAGS='-g -fPIC'
export FCFLAGS='-g -fPIC'
export FLDFLAGS='-fPIC'
export F90LDFLAGS='-fPIC'
export LDFLAGS='-fPIC'

./configure --prefix=/application/tools/i16/parallel-netcdf-1.7.0/install-hpcx --enable-fortran --enable-large-file-test
make -j 28 install

5.3 Building netcdf-C and netcdf-Fortran

Download netcdf-C and netcdf-Fortran from http://www.unidata.ucar.edu/downloads/netcdf/index.jsp.

--- netcdf-C
source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif90
export F90=mpif90

export OMPI_MPICC=icc
export OMPI_MPICXX=icpc
export OMPI_MPIFC=ifort
export OMPI_MPIF90=ifort

HDF5=/application/tools/i16/hdf5-1.8.16/install-hpcx
PNET=/application/tools/i16/parallel-netcdf-1.7.0/install-hpcx

export CPPFLAGS="-I$HDF5/include -I$PNET/include"
export CFLAGS="-I$HDF5/include -I$PNET/include"
export CXXFLAGS="-I$HDF5/include -I$PNET/include"
export FCFLAGS="-I$HDF5/include -I$PNET/include"
export FFLAGS="-I$HDF5/include -I$PNET/include"
export LDFLAGS="-I$HDF5/include -I$PNET/include -L$PNET/lib"

export WRFIO_NCD_LARGE_FILE_SUPPORT=1
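# Note: the -I/-L paths above point netCDF's configure at the parallel HDF5 (5.1)
# and PnetCDF (5.2) installations built earlier, so the HDF5 and pnetcdf checks
# in the configure step below can find those libraries.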


./configure --prefix=/application/tools/i16/netcdf-4.4.0/install-hpcx --enable-fortran --disable-static --enable-shared --with-pic --enable-parallel-tests --enable-pnetcdf --enable-large-file-tests --enable-largefile
make
make install

--- netcdf-Fortran
module purge
source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif90
export F90=mpif90

export OMPI_MPICC=icc
export OMPI_MPICXX=icpc
export OMPI_MPIFC=ifort
export OMPI_MPIF90=ifort

export WRFIO_NCD_LARGE_FILE_SUPPORT=1

HDF5=/application/tools/i16/hdf5-1.8.16/install-hpcx
NCDIR=/application/tools/i16/netcdf-4.4.0/install-hpcx
export LD_LIBRARY_PATH=${NCDIR}/lib:${LD_LIBRARY_PATH}

export CPPFLAGS="-I$HDF5/include -I$NCDIR/include"
export CFLAGS="-I$HDF5/include -I$NCDIR/include"
export CXXFLAGS="-I$HDF5/include -I$NCDIR/include"
export FCFLAGS="-I$HDF5/include -I$NCDIR/include"
export FFLAGS="-I$HDF5/include -I$NCDIR/include"
export LDFLAGS="-I$HDF5/include -I$NCDIR/include -L$NCDIR/lib"

./configure --prefix=$NCDIR --disable-static --enable-shared --with-pic --enable-parallel-tests --enable-large-file-tests --enable-largefile
make
make install

5.4 Building WRF-3.8

source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export PHDF5=/application/tools/i16/hdf5-1.8.16/install-hpcx
export NETCDF=/application/tools/i16/netcdf-4.4.0/install-hpcx
export PNETCDF=/application/tools/i16/parallel-netcdf-1.7.0/install-hpcx
export WRFIO_NCD_LARGE_FILE_SUPPORT=1

cat <<EOF > answer
67    # 67. (dm+sm) INTEL (ifort/icc): HSW/BDW
EOF

./clean -a
./configure < answer
rm -f answer
./compile -j 32 wrf
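A successful compile leaves the model executable under main/ in the WRF source tree. A quick, optional sanity check is to confirm that wrf.exe was produced and inspect which netCDF/HDF5 shared libraries it resolves to (a sketch, run from the WRF source directory with the 5.4 environment still loaded; library names may differ depending on how the prerequisites were built):

---
ls -l main/wrf.exe
export LD_LIBRARY_PATH=$NETCDF/lib:$LD_LIBRARY_PATH
ldd main/wrf.exe | grep -E "netcdf|hdf5"
---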


6. Running WRF with HPCX

source /opt/intel/compilers_and_libraries_2016.1.150/linux/bin/compilervars.sh intel64
module use /opt/hpcx-v1.5.370-icc-MLNX_OFED_LINUX-3.2-2.0.0.0-redhat6.5-x86_64/modulefiles
module load hpcx-ompi-v1.10-i16

export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export LD_LIBRARY_PATH=/application/tools/i16/netcdf-4.4.0/install-hpcx/lib:$LD_LIBRARY_PATH

USE_HCOLL=1
USE_MXM=1

FLAGS=""
HCA=mlx5_0
FLAGS+="-mca btl openib,sm,self "
FLAGS+="-mca btl_openib_if_include $HCA:1 "
FLAGS+="-x MXM_RDMA_PORTS=$HCA:1 "
FLAGS+="-mca rmaps_base_dist_hca $HCA:1 "
FLAGS+="-x HCOLL_MAIN_IB=$HCA:1 "
FLAGS+="-x HCOLL_IB_IF_INCLUDE=$HCA:1 "
FLAGS+="-mca coll_fca_enable 0 "

if [[ "$USE_KNEM" == "1" ]]; then
    FLAGS+="-mca btl_sm_use_knem 1 "
    FLAGS+="-x MXM_SHM_KCOPY_MODE=knem "
else
    FLAGS+="-mca btl_sm_use_knem 0 "
fi

if [[ "$USE_HCOLL" == "1" ]]; then
    FLAGS+="-mca coll_hcoll_enable 1 "
    FLAGS+="-mca coll_hcoll_np 0 "
else
    FLAGS+="-mca coll_hcoll_enable 0 "
fi

if [[ "$USE_MXM" == "1" ]]; then
    FLAGS+="-mca pml yalla "
    FLAGS+="-mca mtl_mxm_np 0 "
    FLAGS+="-x MXM_TLS=$TPORT,shm,self "
    FLAGS+="-x HCOLL_ENABLE_MCAST_ALL=1 "
else
    FLAGS+="-mca mtl ^mxm "
    FLAGS+="-mca pml ob1 "
fi

FLAGS+="-hostfile <machinefile> "
FLAGS+="-report-bindings "
FLAGS+="--bind-to core "
FLAGS+="-map-by node "

mpirun -np 1024 $FLAGS wrf.exe

6.1 Running WRF using Parallel netcdf

In namelist.input, setting the following entries to 11 selects pNetCDF for the corresponding I/O streams:
io_form_boundary
io_form_history
io_form_auxinput2
io_form_auxhist2

Also set nocolons = .true. in the &time_control section of namelist.input.
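For reference, a minimal sketch of how these entries might look inside &time_control; only the io_form_* values and nocolons come from this document, and the remaining &time_control entries of the benchmark namelist are left unchanged and omitted here:

&time_control
 io_form_history    = 11,
 io_form_boundary   = 11,
 io_form_auxinput2  = 11,
 io_form_auxhist2   = 11,
 nocolons           = .true.,
/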

350 Oakmead Pkwy, Sunnyvale, CA 94085


Tel: 408-970-3400 • Fax: 408-970-3403
www.hpcadvisorycouncil.com

© Copyright 2016. HPC Advisory Council. All rights reserved.

