Dedicated partitions

Have a look at this page: Slurm

As of January 2019, the iris cluster also features GPU-accelerated and large-memory nodes. They are all based on Skylake-generation CPUs and are divided into separate partitions, with homogeneous nodes within each partition:

Compute nodes    Features        SBATCH option       sbatch/srun command line
iris-169..186    skylake,volta   #SBATCH -p gpu      sbatch -p gpu
iris-187..190    skylake         #SBATCH -p bigmem   sbatch -p bigmem
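
To check how these partitions are actually configured (node lists, features, and the GPU generic resources), the standard Slurm query commands can be used. A minimal sketch; the output format string is just one possible choice:

sinfo -p gpu,bigmem
sinfo -N -p gpu -o "%N %f %G"    # per-node features and GRES, e.g. gpu:volta:N
scontrol show partition bigmem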

How to reserve GPUs on Slurm?

Have a look at this page: Slurm examples

  • Submit a job script to the gpu partition, requesting 2 cores and 2 GPUs on a single node:
sbatch -N 1 -n 2 --gres=gpu:2 -p gpu --qos qos-gpu job.sh
  • Submit a job script to the gpu partition, requesting 4 nodes with 2 cores/node and 4 GPUs/node:
sbatch -N 4 --ntasks-per-node=2 --gres=gpu:4 -p gpu --qos qos-gpu job.sh
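
For quick testing it can be convenient to request the same resources interactively with srun; a minimal sketch, assuming interactive jobs are permitted under qos-gpu:

srun -N 1 -n 1 --gres=gpu:1 -p gpu --qos qos-gpu --pty bash -i
nvidia-smi    # once on the node, confirm the allocated GPU is visible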

How to run code?

Look at the ULHPC tutorials, especially the ones on Deep Learning.

The desired Python version and TensorFlow can be loaded through Modules, as sketched below.
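
A minimal sketch of loading them with the module command; the module names and version strings below are assumptions for illustration, so check what is actually installed first:

module spider TensorFlow                    # list installed TensorFlow modules
module load lang/Python/3.6.4-foss-2018a    # hypothetical version string
module load lib/TensorFlow/1.8.0-foss-2018a-Python-3.6.4    # hypothetical
python -c 'import tensorflow as tf; print(tf.__version__)'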

How to use X11 forwarding on GPU nodes?

Look at the FAQ entry for the full procedure on X11 forwarding; the general pattern is sketched below.
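
In short, the idea is to enable X11 forwarding on the SSH connection and then propagate it into the job. A sketch, assuming the cluster's Slurm version supports the native --x11 option (otherwise follow the FAQ's method):

ssh -X iris-cluster        # assumed SSH alias for the login node
srun -p gpu --gres=gpu:1 --qos qos-gpu --x11 --pty bash -i
xterm                      # test client to confirm the forwarded display works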

Troubleshooting

Using Bigmem
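
A minimal bigmem job script might look like the following sketch; the QOS name qos-bigmem and the memory figure are assumptions by analogy with the gpu examples above and should be checked against the site documentation (e.g. with sacctmgr show qos):

#!/bin/bash -l
#SBATCH -p bigmem
#SBATCH --qos=qos-bigmem    # assumed QOS name, verify before use
#SBATCH -N 1
#SBATCH --mem=1T            # example figure; request only what the job needs
srun ./my_memory_hungry_app # hypothetical application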