This website is deprecated. The old pages are kept online, but you should refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu.
UL HPC Clusters
The UL HPC platform is composed of 5 clusters:
Clusters organization
Each cluster is organized according to the following generic scheme:
As can be seen, our clusters consist of:
- a front-end server access-*.uni.lu that serves as an interface and access point to the cluster
- an adminfront server, which is only relevant for the management team. It hosts as many VMs (under the Xen hypervisor) as there are services needed to operate the cluster
- a shared storage area, based on NFS and, in some cases, Lustre, which permits the sharing of data across the cluster
- a set of computing nodes, which can be reserved and used for a given period of time by the users of the cluster via the OAR batch scheduler
- a fast interconnect topology that establishes high performance communication between the cluster elements
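As a sketch of the typical workflow described above, a user connects to the front-end and reserves computing nodes through OAR. The cluster name, login, and resource values below are illustrative placeholders; check the cluster documentation for the exact front-end hostnames and default limits.

```shell
# Connect to the cluster front-end (replace <login> and <cluster> accordingly)
ssh <login>@access-<cluster>.uni.lu

# Start an interactive job on 1 core for 30 minutes
oarsub -I -l core=1,walltime=0:30:0

# Or submit a batch script reserving 2 full nodes for 2 hours
oarsub -l nodes=2,walltime=2:00:00 ./my_job.sh

# Inspect your running and queued jobs
oarstat -u $USER
```

Interactive jobs (`-I`) drop you into a shell on the first reserved node, while batch submissions run the given script once the requested resources become available.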
Hardware Overview
All clusters feature Intel processors, based on the following microarchitectures (newest to oldest):
- Intel Xeon Broadwell
- Intel Xeon Haswell
- Intel Xeon Ivybridge
- Intel Xeon Sandybridge
- Intel Xeon Westmere
Operating Systems
Each computing node runs the CentOS Linux operating system, version 7 (or Debian Linux in the case of Chaos, Gaia and G5K), and you will interact with it via the command line.
You are therefore strongly encouraged to become familiar with Linux commands, if you are not already. We recommend the following sites and resources:
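As a quick taste of the command-line basics you will rely on daily, here is a short sampler of common Linux commands (the file and directory names are hypothetical examples):

```shell
# Navigate the file system
pwd              # print the current directory
ls -lh           # list files with human-readable sizes
cd ~/projects    # change to another directory

# Inspect files
cat results.txt         # print a file's contents
less big_output.log     # page through a large file
grep -i "error" *.log   # search files for a pattern (case-insensitive)

# Manage files and permissions
cp input.dat backup/    # copy a file
mv old.txt new.txt      # rename or move a file
chmod u+x run.sh        # make a script executable
```

The tutorial sites below cover these commands and many more in depth.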