UL HPC School Program
All sessions will take place at the Belval Campus.
Important: you are expected to bring your laptop to all sessions, as no workstations will be available on site.
Agenda - November 25th, 2016
| Time | Session | Speaker |
|-------------|---------|-------------|
| 09h30-10h30 | Keynote: Overview and Challenges of the UL HPC Facility at the Belval Horizon | S. Varrette |
| 10h45-12h30 | PS1: Getting Started on the UL HPC platform (SSH, data transfer, OAR, modules, monitoring) | S. Diehl |
| 13h30-15h00 | PS2: HPC workflow with sequential jobs | H. Cartiaux |
| 15h00-16h00 | PS3: Debugging, profiling and performance analysis | V. Plugaru |
| 16h15-17h30 | PS4: HPC workflow with Parallel/Distributed jobs | V. Plugaru |
| 17h30-18h00 | Closing Keynote: Take Away Messages | S. Varrette |
PS = Practical Session using your laptop
Detailed Program for the practical sessions
Practical Session 1
Getting Started (SSH, data transfer, OAR, modules, monitoring), by S. Diehl
This tutorial will guide you through your first steps on the UL HPC platform. We will cover the following topics:
- Platform access via SSH
- Overview of the working environment
- File transfer
- Reserving computing resources with the OAR scheduler and job management
- Usage of the web monitoring interfaces (Monika, Drawgantt, Ganglia)
- Using modules
- Advanced job management and persistent terminal sessions using GNU Screen
- Illustration on Linux kernel compilation
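A typical first session covering these steps might look as follows (the hostname and module name below are illustrative placeholders; check the UL HPC documentation for the actual values):

```bash
# Connect to a cluster front-end over SSH (hostname is an example)
ssh yourlogin@access.cluster.uni.lu

# Reserve one core interactively for 30 minutes with the OAR scheduler
oarsub -I -l core=1,walltime=00:30:00

# List available software modules, then load one (module name is an example)
module avail
module load toolchain/foss

# Keep your session alive across disconnections with GNU Screen:
# detach with Ctrl-a d, reattach later with `screen -r mysession`
screen -S mysession
```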
Practical Session 2
HPC workflow with sequential jobs (test cases on GROMACS, Python and Java), by H. Cartiaux
For many users, the typical usage of an HPC facility is to run the same program with many different parameters. On your local machine, you could simply start the program 100 times sequentially; on an HPC cluster, you will obtain your results much faster by parallelizing the executions.
During this session, we will see 3 use cases:
- Use of the serial launcher (1 node, in sequential and parallel mode);
- Use of the generic launcher to distribute your executions across several nodes (Python script);
- Advanced usage of JCell, a complex Java framework designed to work with cellular genetic algorithms (cGAs).
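The core idea behind a serial launcher can be sketched in a few lines of shell. This is a minimal illustration only, with `echo` standing in for a real program; the actual UL HPC launcher scripts add OAR integration, logging and bookkeeping on top of this pattern:

```shell
#!/bin/bash
# Minimal sketch of a serial launcher: run one program sequentially
# over a list of parameter values, one execution per value.
PROGRAM=echo            # stand-in for your actual program
PARAMS="1 2 3 4"        # parameter list

results=""
for p in $PARAMS; do
    # Each iteration is one sequential execution of the program
    results="$results $($PROGRAM task-$p)"
done
echo "Completed:$results"
```

The generic launcher extends this loop by splitting the parameter list across the cores and nodes of an OAR reservation instead of a single sequential loop.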
Practical Session 3
Debugging, profiling and performance analysis, by V. Plugaru
Knowing what to do when you experience a problem is half the battle. This session is meant to show you the various tools you have at your disposal to understand and solve problems.
During the hands-on session you will:
- See what happens when an application runs out of memory, and how to discover how much memory it actually requires.
- Use debugging tools to understand why your application is crashing.
- Use profiling tools to understand the (slow) performance of your application - and how to improve it.
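Typical command-line invocations for these tasks look like the following (the application name `myapp` is a placeholder, and the exact tools available on the clusters may differ):

```bash
# Compile with debugging symbols so tools can map addresses to source lines
gcc -g -O0 myapp.c -o myapp

# Run under GDB to understand a crash interactively
gdb ./myapp
# (gdb) run
# (gdb) backtrace

# Check memory usage and leaks with Valgrind
valgrind --leak-check=full ./myapp

# Profile where time is spent with gprof (requires compiling with -pg)
gcc -g -pg myapp.c -o myapp
./myapp && gprof ./myapp gmon.out
```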
Practical Session 4
HPC workflow with Parallel/Distributed jobs: application on MPI software, by V. Plugaru
The objective of this session is to demonstrate the execution of several common parallel Computational Fluid Dynamics, Molecular Dynamics and Chemistry packages on the UL HPC platform.
Targeted applications include:
- OpenFOAM: CFD package for solving complex fluid flows involving chemical reactions, turbulence and heat transfer
- NAMD: parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems
- ASE: Atomistic Simulation Environment (Python-based) with the aim of setting up, steering, and analyzing atomistic simulations
- ABINIT: materials science package implementing DFT, DFPT, MBPT and TDDFT
- Quantum ESPRESSO: integrated suite of tools for electronic-structure calculations and materials modeling at the nanoscale
In particular, the following topics will be covered:
- loading and using pre-configured versions of these applications on the clusters
- discussion of the parallelization capabilities of these applications
- general discussion on parallel and distributed software
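Loading a pre-configured application and launching it in parallel over an OAR reservation typically follows this pattern (the module name and application binary are illustrative examples):

```bash
# Load a pre-configured build of an application (module name is an example)
module load cae/OpenFOAM

# Launch an MPI run across the current OAR reservation:
# $OAR_NODEFILE lists one line per reserved core
mpirun -hostfile $OAR_NODEFILE -np $(wc -l < $OAR_NODEFILE) ./my_mpi_app
```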