
Scheduler Commands Cheat Sheet

This cheat sheet compares common commands and job specifications across several job scheduling systems: PBS/Torque, Slurm, LSF, SGE, and Loadleveler. It covers user commands for job submission, deletion, and status checks, the environment variables each scheduler sets, and the script directives for configuring queues, resources, notifications, and other job properties.

User Commands | PBS/Torque | Slurm | LSF | SGE | Loadleveler
Job submission | qsub [script_file] | sbatch [script_file] | bsub [script_file] | qsub [script_file] | llsubmit [script_file]
Job deletion | qdel [job_id] | scancel [job_id] | bkill [job_id] | qdel [job_id] | llcancel [job_id]
Job status (by job) | qstat [job_id] | squeue -j [job_id] | bjobs [job_id] | qstat -u \* [-j job_id] | llq [job_id]
Job status (by user) | qstat -u [user_name] | squeue -u [user_name] | bjobs -u [user_name] | qstat [-u user_name] | llq -u [user_name]
Job hold | qhold [job_id] | scontrol hold [job_id] | bstop [job_id] | qhold [job_id] | llhold [job_id]
Job release | qrls [job_id] | scontrol release [job_id] | bresume [job_id] | qrls [job_id] | llhold -r [job_id]
Queue list | qstat -Q | squeue | bqueues | qconf -sql | llclass
Node list | pbsnodes -l | sinfo -N OR scontrol show nodes | bhosts | qhost | llstatus -L machine
Cluster status | qstat -a | sinfo | bqueues | qhost -q | llstatus -L cluster
GUI | xpbsmon | sview | xlsf OR xlsbatch | qmon | xload
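
In day-to-day use these rows map onto the same submit/monitor/cancel cycle on every system. A minimal sketch of that cycle on PBS/Torque and Slurm (job.sh is a placeholder script name):

    # PBS/Torque: submit, watch, cancel.
    JOBID=$(qsub job.sh)              # qsub prints the new job's ID, e.g. 12345.server
    qstat "$JOBID"                    # status of this one job
    qstat -u "$USER"                  # all of this user's jobs
    qdel "$JOBID"                     # delete the job

    # Slurm: the same cycle.
    JOBID=$(sbatch --parsable job.sh) # --parsable prints only the numeric job ID
    squeue -j "$JOBID"                # status of this one job
    squeue -u "$USER"                 # all of this user's jobs
    scancel "$JOBID"                  # cancel the job
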
Environment | PBS/Torque | Slurm | LSF | SGE | Loadleveler
Job ID | $PBS_JOBID | $SLURM_JOBID | $LSB_JOBID | $JOB_ID | $LOADL_STEP_ID
Submit Directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR | $LSB_SUBCWD | $SGE_O_WORKDIR | $LOADL_STEP_INITDIR
Submit Host | $PBS_O_HOST | $SLURM_SUBMIT_HOST | $LSB_SUB_HOST | $SGE_O_HOST | N/A
Node List | $PBS_NODEFILE | $SLURM_JOB_NODELIST | $LSB_HOSTS / $LSB_MCPU_HOSTS | $PE_HOSTFILE | $LOADL_PROCESSOR_LIST
Job Array Index | $PBS_ARRAYID | $SLURM_ARRAY_TASK_ID | $LSB_JOBINDEX | $SGE_TASK_ID | N/A
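
A job script can use these variables to discover its own context at run time. A minimal sketch for Slurm, with the PBS/Torque equivalents from the table noted in comments:

    #!/bin/bash
    echo "Job ID:      $SLURM_JOBID"        # PBS: $PBS_JOBID
    echo "Submit dir:  $SLURM_SUBMIT_DIR"   # PBS: $PBS_O_WORKDIR
    echo "Submit host: $SLURM_SUBMIT_HOST"  # PBS: $PBS_O_HOST
    echo "Nodes:       $SLURM_JOB_NODELIST" # PBS: $PBS_NODEFILE (a file listing
                                            # the nodes, not an in-line host list)
    cd "$SLURM_SUBMIT_DIR"                  # start from the submission directory
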
Job Specification | PBS/Torque | Slurm | LSF | SGE | Loadleveler
Script directive | #PBS | #SBATCH | #BSUB | #$ | #@
Queue | -q [queue] | -p [queue] | -q [queue] | -q [queue] | class=[queue]
Node Count | -l nodes=[count] | -N [min[-max]] | -n [count] | N/A | node=[count]
CPU Count | -l ppn=[count] OR -l mppwidth=[PE_count] | -n [count] | -n [count] | -pe [PE] [count] | N/A
Wall Clock Limit | -l walltime=[hh:mm:ss] | -t [min] OR -t [days-hh:mm:ss] | -W [hh:mm:ss] | -l h_rt=[seconds] | wall_clock_limit=[hh:mm:ss]
Standard Output File | -o [file_name] | -o [file_name] | -o [file_name] | -o [file_name] | output=[file_name]
Standard Error File | -e [file_name] | -e [file_name] | -e [file_name] | -e [file_name] | error=[file_name]
Combine stdout/err | -j oe (both to stdout) OR -j eo (both to stderr) | (use -o without -e) | (use -o without -e) | -j yes | N/A
Copy Environment | -V | --export=[ALL|NONE|variables] | N/A | -V | environment=COPY_ALL
Event Notification | -m abe | --mail-type=[events] | -B or -N | -m abe | notification=start|error|complete|never|always
Email Address | -M [address] | --mail-user=[address] | -u [address] | -M [address] | notify_user=[address]
Job Name | -N [name] | --job-name=[name] | -J [name] | -N [name] | job_name=[name]
Job Restart | -r [y|n] | --requeue OR --no-requeue (NOTE: configurable default) | -r | -r [yes|no] | restart=[yes|no]
Working Directory | N/A | --workdir=[dir_name] | (submission directory) | -wd [directory] | initialdir=[directory]
Resource Sharing | -l naccesspolicy=singlejob | --exclusive OR --shared | -x | -l exclusive | node_usage=not_shared
Memory Size | -l mem=[MB] | --mem=[mem][M|G|T] OR --mem-per-cpu=[mem][M|G|T] | -M [MB] | -l mem_free=[memory][K|M|G] | requirements=(Memory >= [MB])
Account to Charge | -W group_list=[account] | --account=[account] | -P [account] | -A [account] | N/A
Tasks Per Node | -l mppnppn [PEs_per_node] | --tasks-per-node=[count] | N/A | (fixed allocation_rule in PE) | tasks_per_node=[count]
CPUs Per Task | N/A | --cpus-per-task=[count] | N/A | N/A | N/A
Job Dependency | -d [job_id] | --depend=[state:job_id] | -w [done|exit|finish] | -hold_jid [job_id | job_name] | N/A
Job Project | N/A | --wckey=[name] | -P [name] | -P [name] | N/A
Job Host Preference | N/A | --nodelist=[nodes] AND/OR --exclude=[nodes] | -m [nodes] | -q [queue]@[node] OR -q [queue]@@[hostgroup] | N/A
Quality of Service | -l qos=[name] | --qos=[name] | N/A | N/A | N/A
Job Arrays | -t [array_spec] | --array=[array_spec] (Slurm 2.6+) | -J "name[array_spec]" | -t [array_spec] | N/A
Generic Resources | -l other=[resource_spec] | --gres=[resource_spec] | N/A | -l [resource]=[value] | N/A
Licenses | N/A | --licenses=[license_spec] | -R "rusage[license_spec]" | -l [license]=[count] | N/A
Begin Time | -A "YYYY-MM-DD HH:MM:SS" | --begin=YYYY-MM-DD[THH:MM[:SS]] | -b [[year:][month:]day:]hour:minute | -a [YYMMDDhhmm] | N/A
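
Read column-wise, each column of this table assembles into a complete job script. As a minimal sketch, here is one and the same job written with Slurm and then with PBS/Torque directives (queue name, resource amounts, and ./my_program are placeholders):

    #!/bin/bash
    #SBATCH --job-name=example       # Job Name
    #SBATCH -p batch                 # Queue ('batch' is a placeholder)
    #SBATCH -N 2                     # Node Count
    #SBATCH -n 8                     # CPU (task) count
    #SBATCH -t 01:30:00              # Wall Clock Limit
    #SBATCH -o example.out           # Standard Output File
    #SBATCH -e example.err           # Standard Error File
    #SBATCH --mail-type=END,FAIL     # Event Notification
    srun ./my_program

    #!/bin/bash
    #PBS -N example                  # Job Name
    #PBS -q batch                    # Queue
    #PBS -l nodes=2:ppn=4            # Node Count and CPUs per node
    #PBS -l walltime=01:30:00        # Wall Clock Limit
    #PBS -o example.out              # Standard Output File
    #PBS -e example.err              # Standard Error File
    #PBS -m ae                       # Event Notification (abort, end)
    #PBS -V                          # Copy Environment
    cd "$PBS_O_WORKDIR"
    ./my_program

The array and dependency rows chain onto such scripts at submission time, e.g. sbatch --array=1-10 example.sh or sbatch --dependency=afterok:12345 example.sh, where 12345 stands in for a previously submitted job's ID.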
