HPC @ Uni.lu

High Performance Computing in Luxembourg

This website is deprecated; the old pages are kept online, but you should refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu.

Specialized computing nodes (GPU and large memory nodes)

As of January 2019, the Iris cluster features GPU-accelerated and large-memory nodes. They are all based on Skylake-generation CPUs and are divided into separate partitions:

Compute nodes    Features                SBATCH option       sbatch/srun command line
iris-169..186    skylake,volta           #SBATCH -p gpu      sbatch -p gpu
iris-191..196    skylake,volta,volta32   #SBATCH -p gpu      sbatch -p gpu
iris-187..190    skylake                 #SBATCH -p bigmem   sbatch -p bigmem

Additional details can be found on our Slurm documentation page.
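
As a sketch of how the SBATCH option column above maps onto a job file, a minimal GPU batch script could look as follows; the resource values are placeholders, not site recommendations:

#!/bin/bash -l
#SBATCH -p gpu             # partition from the table above (use -p bigmem for the large-memory nodes)
#SBATCH -G 1               # number of GPUs requested (placeholder value)
#SBATCH --time=01:00:00    # walltime (placeholder value)

nvidia-smi                 # show the GPU(s) allocated to the job

Submitting this file with a plain sbatch command then targets the GPU nodes without any extra command-line options.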

How to reserve GPUs under Slurm?

  • Interactive job on 1 GPU (a quick check of the resulting allocation is sketched after this list)
    • note that you will not be able to run parallel code with srun under interactive jobs (srun under srun)
srun -G 1 -p gpu --pty bash -i
  • Interactive job with 4 GPUs on the same node
    • note that you will not be able to run parallel code with srun under interactive jobs (srun under srun)
srun -N 1 -G 4 -p gpu --pty bash -i
  • Interactive job with 4 GPUs on the same node, one task per GPU, 7 cores per task:
    • note that you will not be able to run parallel code with srun under interactive jobs (srun under srun)
srun -N 1 -G 4 -n 4 -c 7 -p gpu --pty bash -i
  • Submit a job script to the gpu partition, requesting 2 tasks and 2 GPUs on a single node (a minimal job.sh sketch follows below):
    • all examples give the same result
sbatch -N 1 -n 2 --gpus=2 -p gpu job.sh
sbatch -N 1 -n 2 --gpus-per-node=2 -p gpu job.sh
sbatch -N 1 -n 2 --gpus-per-task=1 -p gpu job.sh
sbatch -N 1 -n 2 --gres=gpu:2 -p gpu job.sh
  • Submit a job script to the gpu partition, requesting 4 nodes with 4 tasks/node and 4 GPUs/node:
sbatch -N 4 --ntasks-per-node=4 --gpus-per-task=1 -p gpu job.sh
sbatch -N 4 --ntasks-per-node=4 --gres=gpu:4 -p gpu job.sh
  • Submit a job script to the gpu partition, requesting 16 GPUs and 1 core per GPU:
sbatch -G 16 --cpus-per-gpu=1 -p gpu job.sh
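
Once one of the interactive shells above is running, the allocation can be checked directly on the node; a minimal sketch, assuming nvidia-smi is available on the GPU nodes:

# inside the shell started by srun ... --pty bash -i
nvidia-smi                    # lists the GPU(s) visible to the job
echo $CUDA_VISIBLE_DEVICES    # GPU indices exposed by Slurm to the job
echo $SLURM_JOB_ID            # job id, handy for scontrol show job / sacct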

Additional details can be found in the Slurm examples page.
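
The job.sh file referenced in the sbatch examples above is not shown on this page; a minimal sketch could look like the following, where the module line and the application name are placeholders only:

#!/bin/bash -l
# job.sh -- resources come from the sbatch command line in the examples above,
# so no #SBATCH directives are required here (they could equally be added instead)

# placeholder environment setup -- load whatever modules your application needs
# module load <your-toolchain>

# one process per requested task; GPU binding follows the --gpus/--gpus-per-task/--gres
# option given at submission time
srun ./my_gpu_application     # placeholder executable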

How to reserve GPUs with more memory (32GB on-board HBM2)?

You will need to use Slurm's feature constraints, specifically -C volta32; an equivalent batch-script header is sketched after the list below.

  • Reserve a GPU with 32GB on-board memory in interactive mode
srun -G 1 -C volta32 -p gpu --pty bash -i
  • Submit a job on 4 nodes with 4 GPUs each (GPUs with 32GB on-board memory):
sbatch -N 4 --ntasks-per-node=4 --gpus-per-task=1 -C volta32 -p gpu job.sh
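
Equivalently, the constraint can live in the job script header rather than on the command line; a minimal sketch with placeholder resource values:

#!/bin/bash -l
#SBATCH -p gpu
#SBATCH -C volta32             # only nodes whose GPUs have 32GB on-board memory
#SBATCH -N 4
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1

srun ./my_gpu_application      # placeholder executable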

How to use X11 forwarding with GPU allocations?

  • Use the --x11 flag in your interactive job launch (the full workflow, including the SSH step, is sketched below):
srun -G 1 -p gpu --x11 --pty bash -i
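
The --x11 flag only works if X11 forwarding is already enabled in the SSH session to the cluster; a minimal sketch of the full workflow, where the login host is a placeholder for the one you normally connect to:

# on your workstation: connect with X11 forwarding enabled
ssh -X yourlogin@<iris-login-node>    # placeholder host name

# on the login node: request a GPU with X11 forwarding
srun -G 1 -p gpu --x11 --pty bash -i

# on the allocated node: start any X application (if installed); its window
# should open on your workstation
xterm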

Troubleshooting

Using large memory nodes