
HPC @ Uni.lu

High Performance Computing in Luxembourg

This website is deprecated; the old pages are kept online, but please refer primarily to the new website hpc.uni.lu and the new technical documentation site hpc-docs.uni.lu.

All sessions will take place at the Belval Campus.

Important: you are expected to bring your own laptop to all sessions, since no workstations will be available on site.

All tutorials proposed as practical sessions will be available on GitHub.
The detailed program is given below.

Agenda - November 25th, 2016

Time | Session | Presenter
09h30-10h30 | Keynote: Overview and Challenges of the UL HPC Facility at the Belval Horizon | S. Varrette
10h45-12h30 | PS1: Getting Started on the UL HPC platform (SSH, data transfer, OAR, modules, monitoring) | S. Diehl
12h30-13h30 | Lunch
13h30-15h00 | PS2: HPC workflow with sequential jobs | H. Cartiaux
15h00-16h00 | PS3: Debugging, profiling and performance analysis | V. Plugaru
16h00-16h15 | Coffee break
16h15-17h30 | PS4: HPC workflow with Parallel/Distributed jobs | V. Plugaru
17h30-18h00 | Closing Keynote: Take Away Messages | S. Varrette

PS = Practical Session using your laptop




Detailed Program for the practical sessions

The tutorials and session details are currently being updated.

Practical Session 1

Getting Started (SSH, data transfer, OAR, modules, monitoring), by S. Diehl

Online instructions: on ReadTheDocs or GitHub

This tutorial will guide you through your first steps on the UL HPC platform. We will cover the following topics (a short, illustrative command sketch follows the list):

  • Platform access via SSH
  • Overview of the working environment
  • File transfer
  • Reserving computing resources with the OAR scheduler and job management
  • Usage of the web monitoring interfaces (Monika, Drawgantt, Ganglia)
  • Using modules
  • Advanced job management and Persistent Terminal Sessions using GNU Screen.
  • Illustration on Linux kernel compilation
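
As a rough, hedged illustration of what this workflow looks like in practice (the frontend hostname, login, module name and reservation parameters below are placeholders, not the exact values used in the tutorial):

    # Connect to a cluster frontend over SSH (hostname and login are placeholders)
    ssh jdoe@access-gaia.uni.lu

    # Transfer an input file from your laptop to the cluster
    scp input.dat jdoe@access-gaia.uni.lu:~/

    # Reserve one core for 30 minutes interactively with the OAR scheduler
    oarsub -I -l core=1,walltime=00:30:00

    # Discover and load software environments with modules
    module avail
    module load toolchain/foss        # placeholder module name

    # Keep a persistent terminal session with GNU Screen
    screen -S hpc-school              # start a named session; detach with Ctrl-a d
    screen -r hpc-school              # reattach to it later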

Practical Session 2

HPC workflow with sequential jobs (test cases on GROMACS, Python and Java), by H. Cartiaux

Slides and online instructions: on ReadTheDocs or GitHub

For many users, the typical use of the HPC facilities is to execute one program with many different parameters. On your local machine, you could simply start your program 100 times sequentially; you will obtain results much faster by parallelizing the executions on an HPC cluster.

During this session, we will see 3 use cases (a minimal launcher sketch follows the list):

  1. Use of the serial launcher (1 node, in sequential and parallel mode);
  2. Use of the generic launcher to distribute your executions over several nodes (Python script);
  3. Advanced usage of JCell, a complex Java framework designed to work with cellular genetic algorithms (cGAs)
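
As a minimal, hedged sketch of the serial-launcher idea from use case 1 (the program name, parameter file and OAR options are assumptions, not the tutorial's actual launcher):

    #!/bin/bash -l
    # Serial-launcher sketch: run ./my_program once per line of params.txt,
    # keeping as many runs in flight as there are cores reserved by OAR.
    # Submit with, e.g.: oarsub -l nodes=1,walltime=01:00:00 ./launcher.sh

    NB_CORES=$(wc -l < "$OAR_NODEFILE")   # OAR lists one line per reserved core

    # -L1: one parameter line per run; -P: keep NB_CORES runs in parallel
    xargs -a params.txt -L1 -P "$NB_CORES" ./my_program

Distributing the runs over several nodes (use case 2) follows the same pattern, except that the tasks are started on the remote nodes listed in $OAR_NODEFILE, typically through oarsh.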

Practical Session 3

Debugging, profiling and performance analysis, by V. Plugaru

Knowing what to do when you experience a problem is half the battle. This session is meant to show you the various tools you have at your disposal to understand and solve problems.

During the hands-on session (a sketch of typical tools follows the list) you will:

  1. See what happens when an application runs out of memory, and learn how to discover how much memory it actually requires.
  2. Use debugging tools to understand why your application is crashing.
  3. Use profiling tools to understand the (slow) performance of your application - and how to improve it.
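
As a hedged illustration of the kind of standard tools involved (the application name, input file and the exact tools covered in the session are assumptions):

    # Build with debug symbols and no optimization for meaningful backtraces
    gcc -g -O0 myapp.c -o myapp

    # Measure how much memory a run actually requires (peak resident set size)
    /usr/bin/time -v ./myapp input.dat

    # Investigate a crash interactively with a debugger
    gdb --args ./myapp input.dat

    # Detect memory errors and leaks
    valgrind --leak-check=full ./myapp input.dat

    # Get a first performance profile (assumes Linux perf is available)
    perf record ./myapp input.dat
    perf report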

Practical Session 4

HPC workflow with Parallel/Distributed jobs: application on MPI software, by V. Plugaru

Online instructions: on ReadTheDocs or GitHub

The objective of this session is to demonstrate the execution of several common parallel Computational Fluid Dynamics, Molecular Dynamics and Chemistry applications on the UL HPC platform.

Targeted applications include:

  • OpenFOAM: CFD package for solving complex fluid flows involving chemical reactions, turbulence and heat transfer
  • NAMD: parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems
  • ASE: Atomistic Simulation Environment (Python-based) with the aim of setting up, steering, and analyzing atomistic simulations
  • ABINIT: materials science package implementing DFT, DFPT, MBPT and TDDFT
  • Quantum ESPRESSO: integrated suite of tools for electronic-structure calculations and materials modeling at the nanoscale

In particular, the following topics will be covered (a sketch of a typical parallel run follows the list):

  • loading and using pre-configured versions of these applications on the clusters
  • discussion of the parallelization capabilities of these applications
  • general discussion on parallel and distributed software
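
As a hedged sketch of what a typical parallel run looks like (the module name, node count and MPI launch options are assumptions; the exact commands depend on the application and on the versions installed on the clusters):

    # Reserve two full nodes interactively with OAR
    oarsub -I -l nodes=2,walltime=01:00:00

    # Find and load a pre-configured application (placeholder module name)
    module avail 2>&1 | grep -i openfoam
    module load cae/OpenFOAM

    # Launch an MPI application on all reserved cores; $OAR_NODEFILE lists them.
    # Depending on the MPI flavor, remote startup may need to go through oarsh
    # (e.g. Open MPI's --mca plm_rsh_agent "oarsh" option).
    mpirun -machinefile "$OAR_NODEFILE" -np $(wc -l < "$OAR_NODEFILE") ./my_mpi_application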