This website is deprecated. The old pages are kept online, but you should refer in priority to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu.
Job Scheduling
The UL HPC platform uses two schedulers (OAR and SLURM) across the different generations of clusters currently in production.
Depending on your computing hardware/software environment requirements, you may need to opt for one cluster or another:
- iris: cluster information – SLURM scheduler documentation – SLURM examples – SLURM job launchers
- gaia: cluster information – OAR scheduler documentation
- chaos: cluster information – OAR scheduler documentation
- g5k: cluster information – OAR scheduler documentation
Please note that all new computing equipment is added to the iris cluster.
A transition guide from OAR to SLURM is given below.
OAR to SLURM guide
Basic commands:
Command | OAR (Gaia/Chaos clusters) | SLURM (Iris cluster)
---|---|---
Submit passive/batch job | `oarsub -S [script]` | `sbatch [script]`
Start interactive job | `oarsub -I` | `srun -p interactive --qos debug --pty bash -i`
Queue status | `oarstat` | `squeue`
User job status | `oarstat -u [user]` | `squeue -u [user]`
Specific job status (detailed) | `oarstat -f -j [jobid]` | `scontrol show job [jobid]`
Delete (running/waiting) job | `oardel [jobid]` | `scancel [jobid]`
Hold job | `oarhold [jobid]` | `scontrol hold [jobid]`
Resume held job | `oarresume [jobid]` | `scontrol release [jobid]`
Node list and properties | `oarnodes` | `scontrol show nodes`
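As an illustration of the SLURM column above, a typical batch workflow on iris might look like the following sketch (the script name and job ID are placeholders; these commands only work on a cluster running SLURM):

```bash
# Submit a batch script; SLURM replies with the assigned job ID.
sbatch my_launcher.sh        # placeholder script name
squeue -u $USER              # check the queue status of your own jobs
scontrol show job 123456     # detailed status (123456 is a placeholder job ID)
scancel 123456               # delete the job if needed
```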
Job specification:
Specification | OAR | SLURM
---|---|---
Script directive | `#OAR` | `#SBATCH`
Nodes request | `-l nodes=[count]` | `-N [min[-max]]`
Cores request | `-l core=[count]` | `-n [count]`
Cores-per-node request | `-l nodes=[ncount]/core=[ccount]` | `-N [ncount] --ntasks-per-node=[ccount] -c 1` OR `-N [ncount] --ntasks-per-node=1 -c [ccount]`
Walltime request | `-l [...],walltime=hh:mm:ss` | `-t [min]` OR `-t [days-hh:mm:ss]`
Job array | `--array [count]` | `--array [specification]`
Job name | `-n [name]` | `-J [name]`
Job dependency | `-a [jobid]` | `-d [specification]`
Property request | `-p "[property]='[value]'"` | `-C [specification]`
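Putting the directive mapping above together, here is a minimal launcher sketch for a 2-node, 4-cores-per-node, 1-hour job (the job name and the final command are illustrative placeholders, not taken from this page); the equivalent OAR directives are shown as comments for comparison:

```bash
#!/bin/bash -l
# SLURM launcher requesting 2 nodes x 4 cores for 1 hour.

#SBATCH -J my_job                  # OAR: #OAR -n my_job
#SBATCH -N 2                       # OAR: #OAR -l nodes=2/core=4,walltime=01:00:00
#SBATCH --ntasks-per-node=4
#SBATCH -c 1
#SBATCH -t 01:00:00

# Run one task per allocated core.
srun hostname
```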
Environment variables:
Environment variable | OAR | SLURM
---|---|---
Job ID | `$OAR_JOB_ID` | `$SLURM_JOB_ID`
Resource list | `$OAR_NODEFILE` | `$SLURM_NODELIST` (list, not file! See note #2)
Job name | `$OAR_JOB_NAME` | `$SLURM_JOB_NAME`
Submitting user name | `$OAR_USER` | `$SLURM_JOB_USER`
Task ID within job array | `$OAR_ARRAY_INDEX` | `$SLURM_ARRAY_TASK_ID`
Working directory at submission | `$OAR_WORKING_DIRECTORY` | `$SLURM_SUBMIT_DIR`
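The variables above can be inspected from inside any job. The following sketch simulates that check outside a cluster: the `export` lines stand in for values the scheduler would normally set, and the final pipeline is the inspection command itself.

```bash
#!/usr/bin/env bash
# Simulated scheduler variables (inside a real SLURM job, the scheduler
# sets these itself; the values here are placeholders for illustration).
export SLURM_JOB_ID=123456
export SLURM_JOB_NAME=my_job
export SLURM_SUBMIT_DIR="$HOME/jobs"

# List every OAR/SLURM variable currently defined.
env | egrep "OAR|SLURM" | sort
```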
- Note #1: each scheduler provides many more environment variables within a job; the list above shows only those with an equivalent purpose. For a comprehensive list of available variables, check the manual pages of OAR and SLURM, or simply run `env | egrep "OAR|SLURM"` within a job.
- Note #2: you can easily create an OAR-style nodefile from within a SLURM job with `srun hostname | sort -n > hostfile`.