As mentioned in the earlier post Merry Christmas & Happy NY 2019, we are pleased to announce the release of the 2018 Iris computing capability extension. It adds 22 new nodes containing either GPUs or a large memory capacity, for a total of 952 additional cores, +77.77 TFlops of CPU computing power and +561.6 TFlops on the GPU side.
Bought at the end of 2018, the 18 GPU nodes are of type Dell C4140; each contains 768 GB of RAM, 2 Intel Xeon Gold 6132@2.3 GHz [2x14c] and 4 Nvidia V100 SXM2 GPUs connected via NVLink.
Initial benchmarks performed after installation and configuration have shown a performance of 32.3 TFlops for the CPUs and 283.6 TFlops for the GPUs on these nodes, as measured with the HPL benchmark.
The documentation page related to GPU describes how to access these nodes and run code on them. GPU-specific tutorials will be written and made available for the next HPC school, but you can already have a look at the last HPC school tutorial on Deep Learning to get an idea of how to use TensorFlow on these nodes.
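As a rough sketch of what a job submission for these nodes could look like (the partition name, gres syntax and module name below are assumptions for illustration, not the confirmed site configuration; check the GPU documentation page for the exact values):

```shell
#!/bin/bash -l
# Hypothetical Slurm batch script for one of the new GPU nodes.
# The partition name "gpu" and the gres specification are assumptions;
# verify them against the site documentation before use.
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=7          # 28 cores / 4 GPUs per node
#SBATCH --gres=gpu:1               # request a single V100
#SBATCH --time=00:30:00

# Load a TensorFlow-capable environment
# (the module name is a placeholder, not an actual module on Iris)
module load <your-tensorflow-module>

# Quick sanity check that TensorFlow can see the GPU
srun python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```

With 4 GPUs per node, `--gres=gpu:4` together with `--cpus-per-task=28` would reserve a full node.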
Bought at the end of 2018, the 4 Bigmem nodes are of type Dell R840; each contains 3072 GB of RAM and 4 Intel Xeon Platinum 8180M@2.5 GHz [4x28c].
Initial benchmarks performed after installation and configuration have shown a performance of 24.5 TFlops on these nodes.
The documentation page related to Bigmem describes how to access these nodes and run code on them. You can have a look at the last HPC school tutorial on Big Data to get an idea of how to exploit the full potential of these nodes.
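A submission targeting the large-memory nodes could be sketched as follows (the partition name and application are placeholders, not the confirmed site configuration; check the Bigmem documentation page for the actual values):

```shell
#!/bin/bash -l
# Hypothetical Slurm batch script for one of the large-memory nodes.
# The partition name "bigmem" is an assumption; verify it against
# the site documentation before use.
#SBATCH --job-name=bigmem-test
#SBATCH --partition=bigmem
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=112        # all 4x28 cores of an R840
#SBATCH --mem=3000G                # most of the 3 TB of RAM
#SBATCH --time=01:00:00

# <your-memory-hungry-application> is a placeholder for an actual workload
srun <your-memory-hungry-application>
```

Requesting `--mem` close to the full 3072 GB ensures the scheduler places the job on one of these nodes rather than on a standard compute node.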