Dedicated partitions

Have a look at this page: Slurm

As of January 2019, the iris cluster also features GPU-accelerated and large-memory nodes. They are all based on Skylake-generation CPUs and are divided into separate partitions, with homogeneous nodes within each partition:

Compute nodes    Features         SBATCH option       sbatch/srun command line
iris-169..186    skylake,volta    #SBATCH -p gpu      sbatch -p gpu
iris-187..190    skylake          #SBATCH -p bigmem   sbatch -p bigmem
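
For reference, below is a minimal sketch of a batch script targeting the gpu partition. The GPU count, core count, walltime and the use of nvidia-smi as the payload are assumptions for illustration only; adapt them to your own workflow and add any QOS your allocation requires.

    #!/bin/bash -l
    #SBATCH -p gpu                 # GPU-accelerated partition (skylake,volta nodes)
    #SBATCH -N 1                   # one node
    #SBATCH -n 1                   # one task
    #SBATCH -c 7                   # CPU cores per task (assumption, adapt to your job)
    #SBATCH --gres=gpu:1           # number of GPUs requested (assumption, adapt as needed)
    #SBATCH --time=01:00:00        # walltime (assumption)

    srun nvidia-smi                # placeholder, replace with your GPU application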

How to reserve bigmem nodes on Slurm?

Have a look at this page: Slurm examples

  • Submit a job script to the bigmem partition, requesting 64 cores and 2 TB of RAM on a single node:

        sbatch -N 1 -n 64 --mem=2T -p bigmem --qos qos-bigmem job.sh

  • Submit a job script to the bigmem partition, requesting the full node (112 cores and all associated RAM, ~3 TB):

        sbatch -N 1 -n 112 -p bigmem --qos qos-bigmem job.sh
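
The commands above assume an existing job script named job.sh. A minimal sketch of such a script is given below; the job name and the application invoked are placeholders, not part of the original examples.

    #!/bin/bash -l
    #SBATCH -J bigmem-job          # job name (placeholder)
    #SBATCH --time=02:00:00        # walltime (placeholder)

    # Partition, core count, memory and QOS are supplied on the sbatch
    # command line as shown above, so they are not repeated here.
    echo "Running on $(hostname) with ${SLURM_NTASKS} task(s)"
    srun ./my_large_memory_app     # placeholder for the actual application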

Troubleshooting

Using GPUs