HPC @ Uni.lu

High Performance Computing in Luxembourg

Overview

The cluster is organized as follows:


Overview of the Iris cluster

Computing Capacity

The cluster is composed of the following computing elements:

Important: The Broadwell processors of iris perform 16 DP (double-precision) floating-point operations per cycle and support the AVX2 and FMA3 instruction sets.
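As a back-of-the-envelope illustration, the per-node peak double-precision rate follows directly from the 16 DP ops/cycle figure above. The core count below is derived from the History section (2800 cores over 100 nodes); the clock frequency is an assumption for illustration, not an official specification.

```python
# Theoretical peak DP performance per node: a minimal sketch.
# 16 DP ops/cycle comes from the text above; the per-node core count
# (2800 cores / 100 nodes = 28) is taken from the History section,
# and the 2.4 GHz base clock is an assumed value for illustration.
DP_OPS_PER_CYCLE = 16
CORES_PER_NODE = 28          # 2800 cores / 100 nodes
CLOCK_GHZ = 2.4              # assumed base clock

peak_gflops = DP_OPS_PER_CYCLE * CORES_PER_NODE * CLOCK_GHZ
print(f"Peak: {peak_gflops:.1f} GFLOPS/node")  # -> Peak: 1075.2 GFLOPS/node
```

Under these assumptions, a single node peaks at roughly 1 TFLOPS in double precision; actual sustained performance depends on the workload's use of AVX2/FMA3.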

Interconnect

The following schema describes the topology of the Iris InfiniBand EDR (100 Gb/s) network.

Additionally, the cluster is connected to the infrastructure of the University using 2x40Gb Ethernet links and to the internet using 2x10Gb Ethernet links.

A third 1Gb Ethernet network is also used on the cluster, mainly for services and administration purposes.

The performance of the network has been measured using the MVAPICH OSU Micro-Benchmarks. The results are presented below.

  • [OSU latency results (PDF)](/images/benchs/benchmark_OSU-iris_latency.pdf)
  • [OSU bandwidth results (PDF)](/images/benchs/benchmark_OSU-iris_bandwidth.pdf)
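For reference, the bandwidth figure such benchmarks report is simple arithmetic over the measured transfer time. A minimal sketch of the calculation (all numbers are illustrative assumptions), not the benchmark itself:

```python
# How OSU-style bandwidth figures are derived: a minimal sketch of the
# arithmetic, not the benchmark itself. The message size, iteration
# count and elapsed time below are illustrative assumptions.
msg_size_bytes = 4 * 1024 * 1024   # 4 MiB message (assumed)
iterations = 100                    # back-to-back sends (assumed)
elapsed_s = 0.035                   # measured wall time (assumed)

bandwidth_mib_s = (msg_size_bytes * iterations) / elapsed_s / (1024 ** 2)
print(f"Bandwidth: {bandwidth_mib_s:.1f} MiB/s")  # -> Bandwidth: 11428.6 MiB/s
```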

Storage / Cluster File System

The cluster relies on two types of distributed/parallel file systems to deliver high-performance data storage at BigData scale (i.e. TB).

| FileSystem           | Usage    | #encl | #disk | Raw Capacity [TB] | Max I/O Bandwidth                |
|----------------------|----------|-------|-------|-------------------|----------------------------------|
| SpectrumScale (GPFS) | Home     | 4     | 248   | 1440              | Read: 10 GiB/s / Write: 10 GiB/s |
| Isilon OneFS         | Projects | 23    | 828   | 2496              | n/a                              |
| **Total**            |          | 27    | 1076  | 3936              |                                  |

The current effective shared storage capacity on the Iris cluster is estimated at 3.54 PB:
  • GPFS: 1045 TB
  • Isilon: 2496 TB
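The 3.54 PB estimate is simply the sum of the two effective capacities listed above. A quick check:

```python
# Effective shared storage on Iris: sum of the two file systems'
# effective capacities quoted above (in TB), converted to PB
# (using 1 PB = 1000 TB).
gpfs_tb = 1045
isilon_tb = 2496

total_pb = (gpfs_tb + isilon_tb) / 1000
print(f"Total effective capacity: {total_pb:.2f} PB")  # -> 3.54 PB
```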

GPFS storage

In terms of storage, a dedicated SpectrumScale (GPFS) system is responsible for sharing specific folders (most importantly, users' home directories) across the nodes of the cluster.

A DDN GridScaler solution hosts the SpectrumScale file system. It is composed of a GS7K base enclosure (NSD) and 3 SS8460 expansion enclosures, containing a total of 248 disks (240x 6 TB SED + 8 SSD). The raw capacity is 1440 TB, split into 24 RAID-6 arrays of 10 disks (8+2).
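The raw capacity can be cross-checked from the disk layout just described. The usable figure after RAID-6 parity is pure arithmetic, not an official number; the quoted effective GPFS capacity (1045 TB) is lower still, presumably due to file-system overhead.

```python
# Cross-check of the DDN GridScaler capacity from the layout above:
# 24 RAID-6 arrays of 10 disks each (8 data + 2 parity), 6 TB per disk.
arrays = 24
disks_per_array = 10
data_disks_per_array = 8   # 8+2 RAID-6: two disks per array hold parity
disk_tb = 6

raw_tb = arrays * disks_per_array * disk_tb
usable_tb = arrays * data_disks_per_array * disk_tb
print(f"Raw: {raw_tb} TB, usable after parity: {usable_tb} TB")
# -> Raw: 1440 TB, usable after parity: 1152 TB
```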

Isilon / OneFS

In 2014, the SIU, the UL HPC and the LCSB joined forces (and funding) to acquire a scalable and modular NAS solution able to sustain the University's internal big-data storage needs, i.e. to provide space for centralized data and backups of all devices used by UL staff, as well as all research-related data, including data processed on the UL HPC platform.

Following a public call for tender released in 2014, the EMC Isilon system was selected and deployed in 2015. It is physically hosted in the new CDC (Centre de Calcul) server room in the Maison du Savoir. Composed of 16 enclosures running the OneFS file system, it currently offers an effective capacity of 1.851 PB.

Local storage

All the nodes provide local SSD disks, so you can write to /tmp and get very good performance in terms of I/O operations and throughput.
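A minimal sketch of staging scratch data on the node-local SSD via /tmp (paths and sizes are illustrative; using a per-job temporary directory avoids collisions between jobs sharing a node):

```python
import os
import shutil
import tempfile

# Stage scratch data on the node-local SSD: a minimal sketch.
# A unique per-job directory under /tmp avoids collisions between
# jobs sharing a node; the prefix and data size are illustrative.
scratch = tempfile.mkdtemp(prefix="myjob-", dir="/tmp")

path = os.path.join(scratch, "work.dat")
with open(path, "wb") as fh:
    fh.write(os.urandom(1024 * 1024))  # 1 MiB of scratch data

size = os.path.getsize(path)
print(size)  # -> 1048576

shutil.rmtree(scratch)  # clean up before the job ends
```

Note that /tmp is local to each node: data written there is not visible from other nodes and should be copied back to shared storage before the job finishes.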

History

The Iris cluster has been in operation since the beginning of 2017 and is the most powerful computing platform available within the University of Luxembourg.

  • March 2017: Initialization of the cluster composed of:

    • iris-[1-100], Dell PowerEdge C6320, 100 nodes, 2800 cores, 12.8 TB RAM
    • 10/40Gb Ethernet network, high-speed InfiniBand EDR 100Gb/s interconnect
    • SpectrumScale (GPFS) core storage, 1.44 PB
    • Redundant / load-balanced services with:
      • 2x adminfront servers (cluster management)
      • 2x access servers (user frontend)