
HPC @ Uni.lu

High Performance Computing in Luxembourg

This website is deprecated; the old pages are kept online, but you should refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu


Chaos Storage

Total Raw Capacity:
  • $HOME & $WORK under NFS (XFS over LVM): 180 TB
  • Backup Capacity: 180 TB

In terms of storage, a dedicated NFS server is responsible for sharing specific folders (most importantly, users' home directories) across the nodes of the clusters.

The hardware part is composed of a NetApp E5400 disk enclosure containing 60 disks (3 TB SAS, 7.2k rpm). The raw capacity is 180 TB, split into 5 x RAID-6 groups of 10 disks (8+2); the 10 remaining disks are used as spares.

An additional storage device (of the same capacity) is used as backup target. The filesystem is XFS over LVM (Logical Volume Manager).

The current effective shared storage capacity of the NFS on the Chaos cluster is estimated at 110 TB.
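These capacity figures follow directly from the RAID-6 layout. As a quick sanity check, here is the arithmetic in a minimal Python sketch (all inputs are taken from the description above; the gap between usable and effective capacity is filesystem and LVM overhead):

    # Chaos NFS storage: 5 x RAID-6 groups of 10 disks (8 data + 2 parity),
    # plus 10 spares, with 3 TB per disk.
    disk_tb = 3
    groups, data_disks, parity_disks = 5, 8, 2
    spares = 10

    total_disks = groups * (data_disks + parity_disks) + spares
    raw_tb = total_disks * disk_tb
    usable_tb = groups * data_disks * disk_tb

    print(f"{total_disks} disks, {raw_tb} TB raw, {usable_tb} TB usable")
    # -> 60 disks, 180 TB raw, 120 TB usable, of which ~110 TB is
    #    effectively available once filesystem overhead is accounted for.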

Gaia Storage

The cluster relies on four types of distributed/parallel file systems to deliver high-performance data storage at BigData scale (over 4 PB of effective capacity, excluding backup).

FileSystem    Usage      #encl  #disk  Raw Capacity [TB]  Max I/O Bandwidth
XFS           Backup         2    120                     Read: 1.5 GB/s / Write: 750 MB/s
GPFS          Home/Work      4    240  960                Read: 7 GiB/s / Write: 6.5 GiB/s
Lustre        Scratch        5    260                     Read: 3 GiB/s / Write: 1.5 GiB/s
Isilon OneFS  Projects      29   1044                     n/a
Total:                      43         1798 (excl. backup)

The current effective shared storage capacity on the Gaia cluster is estimated at 4365 TB:
  • Lustre: 477 TB
  • GPFS: 700 TB
  • Isilon: 3188 TB

GPFS storage

In terms of storage, a dedicated GPFS system is responsible for sharing specific folders (most importantly, users' home directories) across the nodes of the clusters.

This system is composed of 8 servers and 4 NetApp E5400 disk enclosures, containing a total of 240 disks (4 TB SATA, 7.2k rpm). The raw capacity is 960 TB, split into 24 x RAID-6 groups of 10 disks (8+2).
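The same RAID-6 arithmetic gives a consistency check of these figures; note that the 768 TB usable value below is derived here, not quoted on this page:

    # GPFS storage: 4 enclosures x 60 disks = 24 RAID-6 groups x 10 disks,
    # with 4 TB per disk.
    enclosures, disks_per_enclosure, disk_tb = 4, 60, 4
    raid_groups, data_disks, parity_disks = 24, 8, 2

    total_disks = enclosures * disks_per_enclosure
    assert total_disks == raid_groups * (data_disks + parity_disks) == 240

    print(f"raw: {total_disks * disk_tb} TB")                  # 960 TB, as stated
    print(f"usable: {raid_groups * data_disks * disk_tb} TB")  # 768 TB before overhead

The ~700 TB effective GPFS capacity quoted earlier is consistent with this 768 TB usable figure minus filesystem overhead and reserved space.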

A benchmark of the GPFS storage with a growing number of concurrent nodes has been performed; the results are displayed below.

Lustre storage

We also provide a Lustre storage space (2 MDS servers, 6 OSS servers, 3 Nexsan E60 bays and 2 NetApp E5400 enclosures), which is used as the $SCRATCH directory.
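Lustre performance on large files depends on how they are striped across the object storage servers. As an illustration of how users can control this, the minimal sketch below stripes a directory over all available OSTs with the standard lfs utility; the target path is an illustrative assumption, not a documented location:

    import os
    import subprocess

    # Stripe a $SCRATCH subdirectory over all OSTs (-c -1) so that large
    # files written there are spread across the OSS servers.
    target = os.path.join(os.environ.get("SCRATCH", "/tmp"), "large-runs")
    os.makedirs(target, exist_ok=True)

    subprocess.run(["lfs", "setstripe", "-c", "-1", target], check=True)
    subprocess.run(["lfs", "getstripe", target], check=True)  # show the layout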

A benchmark of the Lustre storage with a growing number of concurrent nodes has been performed; the results are displayed below.

Isilon / OneFS

In 2014, the SIU, the UL HPC team and the LCSB joined forces (and funding) to acquire a scalable and modular NAS solution able to sustain the need for internal big-data storage, i.e. to provide space for centralized data and backups of all devices used by UL staff, as well as for all research-related data, including the data processed on the UL HPC platform.

Following a public call for tender released in 2014, the EMC Isilon system was selected and deployed in 2015. It is physically hosted in the new CDC (Centre de Calcul) server room of the Maison du Savoir. Composed of 16 enclosures running the OneFS file system, it currently offers an effective capacity of 1.851 PB.

Local storage

All the nodes feature local SSDs, so you can write to /tmp and get very good performance in terms of IOPS and throughput.
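A typical pattern is therefore to stage I/O-intensive temporary files on the node-local SSD and copy only the final results back to the shared storage at the end of the job. A minimal sketch (the output file name and the placeholder workload are illustrative assumptions):

    import os
    import shutil
    import tempfile

    # Create a private scratch directory on the node-local SSD.
    scratch = tempfile.mkdtemp(prefix="job-", dir="/tmp")
    try:
        result = os.path.join(scratch, "output.dat")
        with open(result, "wb") as f:        # I/O-heavy work happens here
            f.write(os.urandom(1 << 20))     # placeholder workload: 1 MiB
        # Copy only the final result back to the shared $HOME.
        shutil.copy(result, os.path.expanduser("~/output.dat"))
    finally:
        shutil.rmtree(scratch)               # always clean up /tmp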

Backup: NFS and GlusterFS

We have recycled all the NFS-based storage (which used to host user and project data) as part of our backup pool. In addition, a GlusterFS-based system was deployed on Certon enclosures.

In total, the cumulative storage capacity of this backup area is TB.

I/O Performance

We evaluated the performance of the different storage systems described above.
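Those evaluations were run with parallel benchmarks over a growing number of concurrent nodes, as mentioned above. As a rough single-node illustration only (not the methodology behind the published numbers; the file size and default path are assumptions), the sketch below measures sequential write throughput to a given directory:

    import os
    import sys
    import time

    def write_throughput(path, size_mib=256, block_mib=4):
        """Sequentially write size_mib MiB to path; return MiB/s (incl. fsync)."""
        block = os.urandom(block_mib << 20)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(size_mib // block_mib):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())             # include flush-to-disk in the timing
        elapsed = time.time() - start
        os.remove(path)
        return size_mib / elapsed

    if __name__ == "__main__":
        # e.g. run against /tmp, $HOME or $SCRATCH to compare the systems
        target = sys.argv[1] if len(sys.argv) > 1 else "/tmp/bench.dat"
        print(f"{target}: {write_throughput(target):.1f} MiB/s")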

G5K Luxembourg site Storage

The Luxembourg site of Grid’5000 hosts 2 storage servers:

  • the first server is the NFS server, attached to a Dell MD1000 disk enclosure. This enclosure contains 14 SATA disks (2 TB each) configured in RAID 50, for a raw capacity of 28 TB.
  • the second server is dedicated to Storage5k (storage on demand), and contains 6 SAS disks (400 GB, 10k rpm), for a raw capacity of 2.4 TB.