The UL HPC platform provides a large variety of scientific applications to its user community, ranging from domain-specific codes to general-purpose development tools that enable research and innovation across a wide set of computational fields.
A new software environment is now available on the Iris HPC cluster, with software packages updated to recent 2019 editions.
This environment will become the default on January 15, 2020, by which time your existing scripts and workflows should be checked against the new application/library versions and updated accordingly, or set up to use the previous software sets (based on 2018 or 2017 software). The current (2018-era) environment will then become legacy, and any new additions and updates to software packages will only happen in the 2019+ environment.
The 2019 environment expands on the preview (development) software environment available since the summer HPC School. It currently includes over 200 applications and supporting libraries; a detailed list is available, as always, at hpc.uni.lu/users/software. Highlights include:
- Machine/Deep Learning: PyTorch, TensorFlow, Keras, Horovod, Apache Spark
- Math & Optimization: MATLAB, Mathematica, CPLEX
- Physics & Chemistry: GROMACS, ESPResSo, QuantumESPRESSO, Meep, ABINIT, NAMD, NWChem, VASP, CRYSTAL
- Bioinformatics: SAMtools, BEDTools, BWA, BioPerl, FastQC, TopHat, Bowtie2, Trinity, BLAST+, ABySS, HTSlib
- Computer Aided Design & Engineering, Computational Fluid Dynamics: ANSYS, ABAQUS, OpenFOAM
- Development & Performance: GNU Compiler Collection (C/C++/Fortran), Intel Parallel Studio (C/C++/Fortran/MKL, Advisor, ITAC, VTune Amplifier), PGI Compilers (C/C++/Fortran), Python, R, Perl, Go, Rust, Julia, ARM Forge & Performance Reports, Scalasca & Score-P, PAPI
- Container systems: Singularity
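As an illustration, any package from this catalogue can be discovered and activated with the usual module commands. The fragment below is a sketch for a launcher script; the package and category names are illustrative, and the exact module names/versions on Iris should be taken from module avail:

```shell
# Launcher-script fragment: locate and activate a package (names illustrative)
module spider GROMACS        # search all software sets for a package
module load bio/GROMACS      # load it together with its toolchain dependencies
module list                  # confirm what is active in the session
```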
With the introduction of the new GPU-AI accelerated compute nodes on Iris, one of our focus points this year has been to enable accelerated applications that can offload computation to the GPU-AI accelerators. In support of this, you will find two new compiler toolchains, toolchain/fosscuda/2019a and toolchain/intelcuda/2019a, which include a recent CUDA library alongside the base community (foss) or commercial (intel) tools, with the corresponding compilers, math and MPI libraries. The PGI compiler suite is now also part of the software environment and brings support for OpenACC directives, enabling easy development for accelerator devices.
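As a sketch, a CUDA-enabled toolchain or the PGI suite could be activated like this in a job script; apart from toolchain/intelcuda/2019a, the module names and the source file are illustrative:

```shell
# Activate the commercial CUDA-enabled toolchain (Intel compilers, MKL, MPI, CUDA)
module load toolchain/intelcuda/2019a

# Alternatively, use the PGI compilers for OpenACC offloading
module load compiler/PGI                  # illustrative module name
pgcc -acc -Minfo=accel saxpy.c -o saxpy   # compile OpenACC regions for the GPU
```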
Several applications and frameworks/libraries are now built against the CUDA-enabled toolchains and will run exclusively on the GPU-AI accelerators.
In particular, we would like to point out that most high-profile Deep Learning frameworks, such as PyTorch and TensorFlow, have GPU-enabled versions, with GPU/CUDA support also existing or coming soon in many other community applications, e.g. for Materials Science, Computational Physics, Chemistry and Life Sciences. GPU support is also available natively in commercial packages such as MATLAB and Mathematica.
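For example, a minimal GPU launcher script could look like the sketch below; the partition name, GRES syntax and application module names are assumptions to adapt to the Iris configuration:

```shell
#!/bin/bash -l
#SBATCH --partition=gpu      # assumed name of the GPU-AI partition
#SBATCH --gres=gpu:1         # request a single GPU
#SBATCH --time=00:10:00

module load swenv/default-env/v1.2-20191021-production
module load lang/Python tools/TensorFlow   # illustrative module names

# Ask TensorFlow whether it can see a CUDA device
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```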
Many applications and libraries can also be used through the Singularity container system. The updated Singularity tool provides many new features, of which we can especially highlight support for Open Containers Initiative (OCI) containers, including Docker OCI images, and support for secure containers: building and running encrypted containers with RSA keys and passphrases.
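As an example of the Docker/OCI support, a public image can be pulled into Singularity's SIF format and executed directly; the image name below is illustrative, and the --nv flag exposes the host GPU driver inside the container:

```shell
# Convert a Docker Hub image to a SIF file (image name illustrative)
singularity pull docker://tensorflow/tensorflow:latest-gpu

# Run a command inside the resulting container, with GPU access via --nv
singularity exec --nv tensorflow_latest-gpu.sif python --version
```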
We would like to remind you that the software environments on the HPC platform can be listed and inspected using module avail swenv and module show $modulename (e.g. module show swenv/default-env/v1.2-20191021-production).
To start using the new software set immediately (i.e. before it becomes the default on January 15, 2020), load the corresponding swenv module before the usual module load ... commands needed for your applications. This switches you to the latest, 2019 software set, which can then be explored with module avail, and any specific software profiles loaded as usual with module load ....
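Putting these steps together, a launcher-script fragment could look as follows; the swenv module is the example named above, while the application module is illustrative:

```shell
# Switch to the 2019 software set before anything else
module load swenv/default-env/v1.2-20191021-production

# Explore the new set, then load application profiles as usual
module avail
module load lang/Python      # illustrative application module
```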
If you need to keep using the 2018-era software set after January 15th, you will have to explicitly load the specific module(s) for it in your launcher scripts and workflow definition files.
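A sketch of such a launcher script is shown below; the version string of the 2018 swenv module is a placeholder, to be replaced with the actual name reported by module avail swenv:

```shell
#!/bin/bash -l
#SBATCH --time=01:00:00

# Pin the legacy 2018-era software set explicitly (placeholder version string)
module load swenv/default-env/v1.1-2018-production
module load lang/R           # then your application modules as before
```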