This website is deprecated. The old pages are kept online, but you should refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu
GPU Nodes - How to Reserve and Use GPUs
Specialized computing nodes (GPU and large memory nodes)
As of January 2019, the Iris cluster features GPU-accelerated and large-memory nodes.
They are all based on Skylake-generation CPUs and are divided into separate partitions:
| Compute nodes | Features | SBATCH option | sbatch/srun command line |
|---------------|----------|---------------|---------------------------|
| iris-169..186 | skylake,volta | #SBATCH -p gpu | sbatch -p gpu |
| iris-191..196 | skylake,volta,volta32 | #SBATCH -p gpu | sbatch -p gpu |
| iris-187..190 | skylake | #SBATCH -p bigmem | sbatch -p bigmem |
Additional details can be found under our Slurm documentation page.
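The features listed in the table above can be combined with the partition to target a specific node type. For example, to request one of the nodes equipped with 32 GB Volta cards, add a feature constraint with Slurm's -C/--constraint option (a minimal sketch; the launcher script name is a placeholder, adapt it to your own script):

#SBATCH -p gpu
#SBATCH -C volta32

or, equivalently, on the command line:

sbatch -p gpu -C volta32 launcher.sh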
How to reserve GPUs under Slurm?
Interactive job on 1 GPU
Note that you will not be able to run parallel code with srun inside an interactive job (srun under srun).
srun -G 1 -p gpu --pty bash -i
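Once the shell starts on the GPU node, you can check which GPU was allocated to the session (a quick sanity check; nvidia-smi is provided by the NVIDIA driver on the GPU nodes, and CUDA_VISIBLE_DEVICES is typically set by Slurm for GPU allocations):

nvidia-smi
echo $CUDA_VISIBLE_DEVICES   # should list only the GPU(s) allocated to this job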
Interactive job with 4 GPUs on the same node
Note that you will not be able to run parallel code with srun inside an interactive job (srun under srun).
srun -N 1 -G 4 -p gpu --pty bash -i
Interactive job with 4 GPUs on the same node, one task per GPU, 7 cores per task:
Note that you will not be able to run parallel code with srun inside an interactive job (srun under srun).
srun -N 1 -G 4 -n 4 -c 7 -p gpu --pty bash -i
Submit a job script to the gpu partition, requesting 2 tasks and 2 GPUs on a single node:
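Below is a minimal launcher sketch for such a job (the job name, walltime, module and program names are placeholders; adjust them to your environment):

#!/bin/bash -l
#SBATCH -J gpu-job
#SBATCH -p gpu
#SBATCH -N 1
#SBATCH -n 2
#SBATCH -G 2
#SBATCH -c 7              # optional; mirrors the 7-cores-per-GPU pattern above
#SBATCH --time=01:00:00

# Load the software environment needed by your application, e.g.:
# module load <your CUDA toolchain>

# Launch one job step across the 2 allocated tasks, with access to the 2 GPUs
srun ./my_gpu_program

Submit it with sbatch launcher.sh (the script name is again a placeholder).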