
HPC @ Uni.lu

High Performance Computing in Luxembourg

This website is deprecated. The old pages are kept online, but you should refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu

Uni.lu-HPC-Facilities_Acceptable-Use-Policy_v2.0.pdf

Since 2007, the University of Luxembourg has operated a large academic HPC facility which remains the reference implementation within the country, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to the upcoming Euro-HPC Luxembourg supercomputer. Special emphasis was placed on combining large computing power with huge data storage capacity, to accelerate the research performed in intensive computing and large-scale data analytics (Big Data).

The University extends access to its HPC resources (including facilities, services and HPC experts) to its students, staff, research partners (including scientific staff of national public organizations and external partners for the duration of joint research projects) and to industrial partners.

Important: All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP).


The purpose of this document is to define the rules and terms governing acceptable use of resources (core hours, license hours, data storage capacity as well as network connectivity and technical support), including access, utilization and security of the resources and data.

Kindly read and keep a signed copy of this document before using the facility.

One of the requirements stemming from the AUP is to acknowledge the University of Luxembourg HPC project in ALL publications and contributions whose results and/or content were obtained or derived from the usage of the UL HPC platform. The acknowledgement phrase (in LaTeX) should be the following:

{\noindent \textbf{Acknowledgments:}}
The experiments presented in this paper were carried out
using the HPC facilities of the University of Luxembourg~\cite{VBCG_HPCS14}
{\small -- see \href{http://hpc.uni.lu}{hpc.uni.lu}}.

You should cite the reference article whenever it is appropriate. The BibTeX entry to use to refer to the platform as indicated above is the following:

@InProceedings{VBCG_HPCS14,
   author =       {S. Varrette and P. Bouvry and H. Cartiaux and F. Georgatos},
   title =        {Management of an Academic HPC Cluster: The UL Experience},
   booktitle =    {Proc. of the 2014 Intl. Conf. on High Performance Computing \& Simulation (HPCS 2014)},
   year =         {2014},
   pages =        {959--967},
   month =        {July},
   address =      {Bologna, Italy},
   publisher =    {IEEE},
}

Finally, you are also requested to tag the publication(s) produced thanks to the usage of the UL HPC platform when registering them on ORBilu. Even more than the acknowledgement and citation within your articles, this tag is a very important indicator for us to quantify the concrete impact of the HPC facility on the research performed at the University.

Instructions to add the “ULHPC” tag to your ORBilu entries are provided on this page.

General guidelines for reasonable usage

In addition to the AUP, you are generally expected to maintain reasonable usage of the shared facility offered to you. This includes:

  • to rely on interactive jobs for compiling/testing purposes – the interactive partition is typically meant for that purpose as follows:
# In the sequel, (...)$> is used to denote the command prompt and IS NOT PART
# of the command
(access)$> si      # Shortcut to get 1 core for 1h
# Equivalent to:
(access)$> srun -p interactive --qos debug -C batch --pty bash
# Use the batch or GPU partitions only if you have specific needs/features tied to these partitions
# Ex: request a skylake core
(access)$> srun -p batch -C skylake -N 1 --ntasks-per-node 1 --pty bash
# Ex: request a GPU node with 1 GPU
(access)$> srun -p gpu -G 1 -N 1 --ntasks-per-node 1 --pty bash
  • for parallel compiling purposes, you may want to increase the number of CPUs (actually cores) per task with -c <N> and then build with make -j $SLURM_CPUS_PER_TASK
  • optimize your compiled software where possible (Ex: with the -O3 or -xHost compiler options), use recent or faster compilers, use accelerators (GPUs) or, simply, the latest software versions provided to you through modules by the UL HPC Team

  • Prepare a launcher script whenever possible. Use #SBATCH comments to simplify and automate the way you submit the script, with the best possible options set as defaults – see the sketch after this list.

  • Consider that any run requiring at least 4 full compute nodes is subject to extra care, as it impacts the experience of the other users of the facility, who depend on the completion of your job to see their own jobs scheduled. Pay attention to the following criteria:
    • optimize as much as possible the wall-time of your job (-t HH:MM:SS) and the resources allocated (CPUs, memory, etc.)
      • always check (typically with htop) that you are really using the resources allocated and not only 1 core – see the monitoring sketch after this list
      • use tests on small cases to best optimize your usage
    • if possible, try to schedule large jobs during the night or the weekend
    • if you need to use more than 20% of the platform, contact the HPC team.
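
As an illustration, below is a minimal sketch of such a launcher script. The job name, the module name and the program (my_program) are placeholders, and the requested resources are only examples to adapt to your own workload:

#!/bin/bash -l
#SBATCH --job-name=my_job          # placeholder job name
#SBATCH --partition=batch          # or interactive/gpu, depending on your needs
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4          # cores per task, exposed as $SLURM_CPUS_PER_TASK
#SBATCH --time=0-02:00:00          # wall-time (D-HH:MM:SS): request only what you need
#SBATCH --output=%x-%j.out         # job name (%x) and job ID (%j) in the output file name

# Load your software environment through modules (the module name is an example)
module purge
module load toolchain/foss

# Use the allocated cores rather than a hard-coded value
srun my_program --threads "${SLURM_CPUS_PER_TASK}"

Submit it with sbatch launcher.sh; any #SBATCH default can still be overridden on the command line, e.g. sbatch -t 04:00:00 launcher.sh.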
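
To check that a running job really uses what was allocated to it (see the criteria above), the commands below may help. <JOBID> is a placeholder for your job ID, and whether you can attach an interactive step to a running job (to run htop on its nodes) depends on the Slurm version and configuration of the cluster:

# List your running jobs with their state, elapsed time, node and CPU counts
squeue -u "$USER" -o "%.10i %.9P %.20j %.8T %.10M %.6D %C"
# Report the CPU time and memory actually consumed by a running job
sstat --jobs=<JOBID> --format=JobID,AveCPU,MaxRSS,NTasks
# Attach an interactive step within the job's allocation and inspect it with htop
# (--overlap is required on recent Slurm versions to share the already-allocated resources)
srun --jobid=<JOBID> --overlap --pty htop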

Cluster Maintenance

We try to plan cluster maintenance and inform users at least 4 weeks in advance by mail (using a dedicated mailing list). In all cases:

  • A dedicated ticket is opened on the HPC tracker and is kept up to date with the operations in progress and their completion state – feel free to add yourself as a watcher to be notified of changes.
  • A colored banner is displayed on all access servers so that you are quickly informed of any upcoming maintenance operation upon connecting to the cluster
  • During the maintenance period, access to the involved cluster frontend is denied and any users still logged in are disconnected at the beginning of the maintenance
    • if for some reason during the maintenance you urgently need to collect data from your account, please contact the HPC Team by sending a mail to: hpc-team@uni.lu.
  • We will notify you of the end of the maintenance with a summary of the performed operations.

However, be aware that under unexpected circumstances, the operated clusters may have to be shut down without prior notice in the case of an emergency or unavoidable maintenance intervention.

The HPC team reserves the right to intervene in user activity without notice when such activity may destabilize the platform and/or is at the expense of other users, and/or to monitor/verify/debug ongoing system activity.