This website is deprecated; the old pages are kept online, but please refer first to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu.
Large Memory Nodes - How to Reserve and Use
Specialized computing nodes (GPU and large memory nodes)
As of January 2019, the Iris cluster features GPU-accelerated and large memory nodes. They are all based on Skylake-generation CPUs and are divided into separate partitions:
| Compute nodes | Features | SBATCH option | sbatch/srun command line |
|---|---|---|---|
| iris-169..186 | skylake,volta | `#SBATCH -p gpu` | `sbatch -p gpu` |
| iris-191..196 | skylake,volta,volta32 | `#SBATCH -p gpu` | `sbatch -p gpu` |
| iris-187..190 | skylake | `#SBATCH -p bigmem` | `sbatch -p bigmem` |
Additional details can be found on our Slurm documentation page.
How to reserve large memory (bigmem) nodes under Slurm?
- Interactive job with 0.5TB of RAM on one large memory node:
  - one task will automatically be reserved with as many cores as required to fulfill the Memory-per-Core ratio (~26GB RAM/core on large memory and GPU nodes)
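A minimal sketch of such an interactive request (the walltime is an assumption; adjust it to your needs):

```bash
# Interactive session on a large memory node: 1 task, 0.5TB of RAM
# Slurm will allocate as many cores as needed to match the memory request
srun -p bigmem -N 1 -n 1 --mem=512G --time=01:00:00 --pty bash -i
```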
- Submit a job script to the `bigmem` partition, requesting 64 cores and 2TB of RAM on a single node:
  - note that additional cores may be reserved for the job in order to preserve the Memory-per-Core ratio (~26GB RAM/core on large memory and GPU nodes)
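A sketch of such a batch script (job name, walltime, and the application command are placeholders, not part of the original page):

```bash
#!/bin/bash -l
#SBATCH -J bigmem-2TB          # placeholder job name
#SBATCH -p bigmem
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 64                  # 64 cores for the single task
#SBATCH --mem=2T               # 2TB of RAM on the node
#SBATCH --time=02:00:00        # placeholder walltime

srun ./your_memory_hungry_app  # placeholder: replace with your actual application
```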
- Submit a job script to the `bigmem` partition, requesting the full node (112 tasks at 1 core/task and all associated RAM, ~3TB):
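A sketch of a full-node request (job name, walltime, and the application command are placeholders; `--mem=0` asks Slurm for all the memory of the allocated node):

```bash
#!/bin/bash -l
#SBATCH -J bigmem-full         # placeholder job name
#SBATCH -p bigmem
#SBATCH -N 1
#SBATCH --ntasks-per-node=112  # one task per core on the 112-core bigmem nodes
#SBATCH -c 1
#SBATCH --mem=0                # request all memory available on the node (~3TB)
#SBATCH --time=02:00:00        # placeholder walltime

srun ./your_parallel_app       # placeholder: launches one task per reserved core
```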
Additional details can be found on the Slurm examples page.
Troubleshooting
- Take a look at the FAQ, which may already include a solution to your issue, or Report a problem.