Specialized computing nodes (GPU and large memory nodes)
As of January 2019, the Iris cluster features GPU-accelerated and large-memory nodes.
They are all based on Skylake-generation CPUs and are divided into separate partitions:
| Compute nodes | Features              | SBATCH option     | sbatch/srun command line |
|---------------|------------------------|-------------------|--------------------------|
| iris-169..186 | skylake,volta          | #SBATCH -p gpu    | sbatch -p gpu            |
| iris-191..196 | skylake,volta,volta32  | #SBATCH -p gpu    | sbatch -p gpu            |
| iris-187..190 | skylake                | #SBATCH -p bigmem | sbatch -p bigmem         |
Additional details can be found on our Slurm documentation page.
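To actually use the GPUs on the gpu partition, the GPU devices themselves typically need to be requested in addition to the partition. Below is a minimal sketch, assuming the GPUs are declared to Slurm as a generic resource (GRES) named gpu and that the node features listed above (e.g. volta32) can be selected with the -C/--constraint option:

# Interactive job on a GPU node, requesting 1 GPU (assumes a "gpu" GRES is configured)
srun -p gpu --gres=gpu:1 --pty bash -i

# Batch job restricted to the nodes carrying the volta32 feature
sbatch -p gpu --gres=gpu:1 -C volta32 job.sh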
How to reserve large memory (bigmem) nodes under Slurm?
Interactive job with 0.5TB of RAM on 1 large memory node
one task will automatically be reserved with as many cores as required to fulfill the Memory-per-Core ratio (~26GB RAM/core on large-memory and GPU nodes); for a 512GB request this amounts to roughly 20 cores
srun -p bigmem -N 1 --mem=512G --pty bash -i
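Inside the resulting interactive shell, you can check how many cores Slurm actually reserved to satisfy the memory request. This minimal sketch relies only on standard Slurm environment variables and commands:

# Number of CPUs allocated on the node for this job
echo $SLURM_CPUS_ON_NODE

# Full allocation details (CPUs, memory, ...) of the current job
scontrol show job $SLURM_JOB_ID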
Submit a job script to the bigmem partition, requesting 64 cores and 2TB of RAM on a single node:
note that additional cores may be reserved for the job in order to preserve the Memory-per-Core ratio (~26GB RAM/core on large-memory and GPU nodes)
sbatch -N 1 -n 1 -c 64 --mem=2T -p bigmem job.sh
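The same resources can also be requested directly in the job script instead of on the sbatch command line. Below is a minimal sketch of what job.sh could look like for this case; the job name and the application my_app are hypothetical placeholders:

#!/bin/bash -l
#SBATCH -p bigmem
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 64
#SBATCH --mem=2T
#SBATCH -J bigmem-64c        # hypothetical job name
#SBATCH -t 02:00:00          # adjust the walltime to your needs

# Run a single multithreaded task on the 64 reserved cores
srun ./my_app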
Submit a job script to the bigmem partition, requesting the full node (112 tasks at 1 core/task and all associated RAM, ~3TB):
sbatch -N 1 -n 112 -p bigmem job.sh
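With 112 tasks reserved, job.sh would typically launch one process per task, for instance for an MPI application. A minimal sketch, where my_mpi_app is a hypothetical placeholder:

# Launch one process per reserved task (112 in total)
srun -n $SLURM_NTASKS ./my_mpi_app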
Additional details can be found in the Slurm examples page.
Troubleshooting
Take a look at the FAQ, which may already include a solution to your issue, or Report a problem.