HPC @ Uni.lu

High Performance Computing in Luxembourg

This HPC facility is shared among multiple users; you are therefore asked to be reasonable in your usage of this shared platform. For instance:

  • Plan large-scale experiments during off-peak periods (night-time or weekends), or use OAR’s “besteffort” job class.
  • Comply with the resource limits as they are defined in the queueing system and/or communicated to you.
  • Submit efficient jobs with reasonable walltimes, typically finishing within 1 to 3 days.
  • Do not place more than 2 advance reservations, as this degrades overall resource utilization; submit batch jobs instead.
  • You SHALL NOT share your account or your credentials with other users / external parties.
  • Do not run CPU-, memory- or IO-intensive applications on the access nodes of the clusters.
  • We do our best to provide a reliable service and maintain data availability and integrity via redundancy in storage blocks; nonetheless, you should always keep your own copy of any data critical to your work’s continuity.
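As a sketch of the “besteffort” class mentioned above (the resource request and script name are illustrative assumptions; check the platform’s OAR documentation for the options available locally):

```shell
# Submit a best-effort batch job: it only runs on otherwise idle resources
# and may be killed when a regular job needs them, so it should checkpoint.
# (Illustrative resource request; adapt nodes/walltime to your experiment.)
oarsub -t besteffort -l nodes=2,walltime=12 ./my_experiment.sh

# Combining with idempotent mode asks OAR to resubmit the job
# automatically if it is killed by a higher-priority job:
oarsub -t besteffort -t idempotent -l nodes=2,walltime=12 ./my_experiment.sh
```

Here `./my_experiment.sh` stands for your own launcher script; best-effort jobs pair naturally with application-level checkpointing so that killed runs can resume.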
Important: For ALL publications and contributions presenting results and/or contents obtained or derived from the usage of the UL HPC platform, you are requested to perform the following actions:
  1. Mention the HPC facility @ Uni.lu in the acknowledgment section, using the official banner mentioned below.
  2. Tag your publication upon registration on OrbiLu.

Official Acknowledgement banner

You must mention the HPC facility @ Uni.lu in all publications presenting results or contents obtained or derived from the usage of this platform. The official acknowledgment banner to use in your publication must be the following (in LaTeX) or equivalent:

Acknowledgment:

the experiments presented in this paper were carried out
using the HPC facilities of the University of Luxembourg~\cite{VBCG_HPCS14} 
{\small -- see \url{http://hpc.uni.lu}}.

The BibTeX entry to use when referring to the platform as indicated above is the following:

@InProceedings{VBCG_HPCS14,
   author =       {S. Varrette and P. Bouvry and H. Cartiaux and F. Georgatos},
   title =        {Management of an Academic HPC Cluster: The UL Experience},
   booktitle =    {Proc. of the 2014 Intl. Conf. on High Performance Computing \& Simulation (HPCS 2014)},
   year =         {2014},
   pages =        {959--967},
   month =        {July},
   address =      {Bologna, Italy},
   publisher =    {IEEE},
}

ULHPC Tag on OrbiLu

You are also requested to tag the publication(s) produced thanks to the usage of the UL HPC platform when registering them on OrbiLu. Beyond the acknowledgement and citation you should include in your articles, this tag is a very important indicator for us to quantify the concrete impact of the HPC facility on the research performed at the UL.

Instructions to add the “ULHPC” tag to your OrbiLu publications are provided on this page.

General guidelines for a reasonable usage

For the moment, there is no fixed maximum limit on the volume of resources (e.g. CPU hours) that can be consumed.

You are expected to maintain the following reasonable usage:

  • for compiling/testing, open an interactive session with typically 4 cores (possibly distributed among machines, in order to test cross-node communication). oarsub -I -l nodes=2/core=2,walltime=4 should do the job at this level (2 cores on each of 2 nodes for 4 hours) and is sufficient to test most aspects of user jobs, including application building and parallel execution.
  • when preparing a batch (passive) job, adapt the launcher scripts as mentioned in the OAR documentation
  • optimize your compiled jobs where possible: use -O3, recent or faster compilers, accelerators (GPUs), or simply the latest software versions
  • when going for bigger runs, do not reserve more than 120 cores during working days and working hours
  • during the night or the weekend, you may use more resources; yet, in that case, prefer the best-effort mode combined with checkpointing. In all circumstances, never use more than 50% of the platform.
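A minimal batch launcher, as a sketch only (the application name and MPI invocation are assumptions; adapt the launcher templates from the OAR documentation to your own toolchain):

```shell
#!/bin/bash
# Hypothetical launcher sketch for a passive (batch) OAR job.
# OAR exports OAR_NODEFILE, the file listing the cores allocated to the job,
# and OAR_JOB_ID, the job identifier.
echo "Job ${OAR_JOB_ID:-unknown} running on:"
cat "$OAR_NODEFILE"

# Example MPI launch over the allocated cores; mpirun options and the
# binary name ./my_parallel_app are assumptions for illustration.
mpirun -machinefile "$OAR_NODEFILE" ./my_parallel_app
```

Such a script would then be submitted passively, e.g. with oarsub -l nodes=4,walltime=24 ./launcher.sh, rather than run on an access node.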

We try to plan cluster maintenance and inform users in advance (via a dedicated mailing list); however, the cluster may have to be shut down without prior notice in the case of an emergency or unavoidable maintenance intervention. The HPC team reserves the right to intervene in user activity without notice when such activity may destabilize the platform and/or is at the expense of other users, and to monitor/verify/debug ongoing system activity.