Job Submission On CC-IN2P3 GPU Farm: April 2019
[Diagram: from an interactive node, the user submits jobs to the GPU farm with qlogin (interactive) or qsub (batch). Two GPU types are available: K80 (= 2 GPUs) and V100 (= 1 GPU).]
[Diagram: storage layout around the GPU farm. Software and user areas live under /pbs/throng; data storage lives under /sps.]
Interactive job:
qlogin -l GPU=1,sps=1,GPUtype=<K80|V100> -q mc_gpu_interactive -pe multicores_gpu 4

Batch job:
qsub -l GPU=1,sps=1,GPUtype=<K80|V100> [ options ] <file_to_execute>
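For instance, a minimal interactive session on a K80 worker might look as follows (the nvidia-smi check is an illustrative addition, not part of the original slides):

qlogin -l GPU=1,sps=1,GPUtype=K80 -q mc_gpu_interactive -pe multicores_gpu 4
# once logged in on the worker, verify the allocated GPUs
nvidia-smi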
Custom parameters

Environment ( -pe ):
- Multicores (1 node): multicores_gpu 4
- Parallel (multi-node, K80 only!): openmpigpu_2 x (with x = 2 * nb of nodes)
  or openmpigpu_4 x (with x = 4 * nb of nodes)
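As a sketch, a parallel job spanning 2 K80 nodes through openmpigpu_2 would request x = 2 * 2 = 4 slots (the script name is a placeholder; other options follow the batch syntax above):

qsub -l GPU=1,sps=1,GPUtype=K80 -pe openmpigpu_2 4 <file_to_execute>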
Misc.
- Output file path: -o
- Error file path: -e
- Passing environment vars: -V
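Combining these options, a hypothetical batch submission with explicit log paths (the /sps paths are illustrative):

qsub -l GPU=1,sps=1,GPUtype=V100 \
     -o /sps/<group>/<user>/job.out \
     -e /sps/<group>/<user>/job.err \
     -V <file_to_execute>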
Installed libraries
Updates: the two latest versions (n, n-1) are maintained.
GPU Jobs documentation: https://doc.cc.in2p3.fr/en:jobs_gpu
Custom environment
Execute your job in a custom environment via Singularity.
Singularity gives you the opportunity to execute an image with the
right pieces of software installed (e.g. the CUDA 10.0 library in this case).
This software flexibility is of course possible as long as the image
remains compatible with the workers' hardware.
One can also create and use one's own images, which brings maximum
flexibility to the farm (see you @ CC Singularity Training Course)
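As a sketch of building such an image, Singularity 3.x can bootstrap directly from a Docker image; the base image below is an assumption, not a CC-IN2P3 recommendation:

# build a CUDA 10.0 image locally (requires root), then copy it to the farm
sudo singularity build my_cuda10.sif docker://nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04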
CC-IN2P3 Singularity Image Catalog
Singularity images:
- /cvmfs/singularity.in2p3.fr/images/HPC/GPU (central catalog)
- /pbs/throng (group software area)
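For example, from an interactive qlogin session one can open a shell inside a catalog image (the image name is a placeholder):

singularity shell --nv /cvmfs/singularity.in2p3.fr/images/HPC/GPU/<image_name>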
batch_launcher.sh
#!/bin/bash
# executed on the worker
# --nv enables NVIDIA GPU support inside the container; --bind exposes the storage areas
/bin/singularity exec --nv --bind /sps:/sps --bind /pbs:/pbs <image_path> <path_to>/start.sh
start.sh
#!/bin/bash
# executed on the worker, inside the singularity image
# activate the Python environment, then run the program
source <path_to_python_env> activate <env>
python <path_to>/program.py
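The two scripts chain as follows: qsub launches batch_launcher.sh on a worker, which starts the container, which runs start.sh inside it. A hedged end-to-end submission (resource values repeat the batch syntax above):

qsub -l GPU=1,sps=1,GPUtype=V100 -pe multicores_gpu 4 batch_launcher.sh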
Questions?