The clusters come with a common programming environment. This page explains how to use the software it provides.
Software policy and release cycles
SCITAS provides up to three software releases on its clusters:
| Release | Description |
| deprecated | This is the old production environment. It is no longer supported, but will be retained for one year. |
| stable | This is the current stable release for the year in progress. We guarantee that the modules here will work and that modules will not be removed. |
| future | This is the testing area for what will become the next production environment and it is not guaranteed to exist! Modules in here are not guaranteed to work and may be removed without warning. |
When you connect to the clusters you will see stable, i.e. the current production release. To switch to a different release see the section Reverting to the old environment below.
Supported software stacks
SCITAS fully supports the following software stacks - a stack being composed of a compiler, an MPI library and a LAPACK library - on its clusters:
| Stack | Compiler | MPI | BLAS/LAPACK |
| GNU | GCC | MVAPICH 2.3.1 | OpenBLAS 0.3.6 |
| Intel | Intel 2018 Update 5 | Intel MPI 2018.4.274 | Intel MKL 2018.5.274 |
Full support implies a commitment to act as promptly as possible to fix any issue in software already installed as part of either stack.
For the convenience of its users, SCITAS also supports the following on a best-effort basis:
| Stack | Compiler | MPI | BLAS/LAPACK |
| GNU 8.3.0 | GCC 8.3.0 | MVAPICH 2.3.1 | OpenBLAS 0.3.6 |
Software libraries (e.g. FFTW, HDF5, etc.) will be installed for all the aforementioned combinations. These libraries may have multiple versions due to the implementation and functionality required. For instance, the FFTW library is installed with different levels of support for multi-threading and MPI.
End-user applications (e.g. Quantum-ESPRESSO, Gromacs, etc.) will be installed only as part of the two fully supported stacks and only in one version (with the configuration decided by the SCITAS application experts). If users require different options, SCITAS will assist them in compiling their own version.
Modules and LMOD
The SCITAS-managed clusters use the LMOD tool to manage scientific software. It is compatible with the classical Modules tool but brings a large number of improvements.
The official LMOD documentation can be consulted at: http://lmod.readthedocs.io/en/latest/
A slightly simplified example of using LMOD is:
Connect to a cluster and see what base modules are available. These are either compilers or stand-alone packages such as MATLAB.
Load a compiler to see the modules built with the chosen compiler. These may be scientific libraries, serial (non-MPI) codes or MPI libraries.
Load an MPI library to see the modules that use this MPI flavour.
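For illustration, the whole sequence might look like this (gcc and mvapich2 are the module names used elsewhere on this page; the exact modules listed depend on the release you are using):

```bash
# 1. See the base modules: compilers and stand-alone packages
module avail

# 2. Load a compiler; the modules built with it become visible
module load gcc
module avail

# 3. Load an MPI library; the MPI-dependent modules become visible
module load mvapich2
module avail
```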
LMOD knows which modules are incompatible and will take the necessary steps to ensure a consistent environment:
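For example, loading a second compiler replaces the first one and everything built with it. A sketch, assuming the compilers are declared as an LMOD family as is usual on such setups (LMOD's exact messages are not reproduced here):

```bash
module load gcc mvapich2 fftw   # modules from the GNU stack
module load intel               # LMOD swaps out gcc and reloads or deactivates
                                # the gcc-built modules so the environment
                                # stays consistent
```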
Searching for software
Currently, if you want to search for software, you first need to load a software stack. For example:
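A short sketch, using fftw (discussed below) as the search target:

```bash
module load gcc mvapich2   # load a software stack first
module avail fftw          # then search for the package
```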
The slightly ugly reality
In reality, running "module avail fftw" (after having loaded gcc and mvapich2) returns several entries for the same package.
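The listing looks roughly like this; the versions and option suffixes below are hypothetical and only illustrate the naming scheme explained next:

```bash
module avail fftw
# Illustrative output only (not actual cluster output):
#   fftw/3.3.8-mpi (D)    fftw/3.3.8-mpi-openmp    fftw/3.3.8-openmp
```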
The names follow the pattern <module name>/<version-options>, the options being the "key" configuration options such as MPI or OpenMP activation.
The (D) after a module name indicates that, if two or more versions of the same package are available, this is the version that will be loaded by default.
If you need a specific version because of the options with which it was built, you have to specify the full name:
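For example, using the hypothetical names from above:

```bash
module load fftw/3.3.8-mpi-openmp   # full name: package/version-options
```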
If you really want to know how a module was built, run "module whatis <module name>", which will show, wherever possible, the list of options used at configure time:
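Again with the hypothetical module name used above:

```bash
module whatis fftw/3.3.8-mpi-openmp
```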
Saving your environment
If you have a few sets of modules that you use regularly, a nice feature of LMOD is the ability to save them as named collections and reload them with a single command.
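A sketch using LMOD's collection commands (the collection name mysim is just an example):

```bash
module load gcc mvapich2 fftw   # load the modules you want to keep
module save mysim               # save them as a named collection

# later, in a new session
module purge
module restore mysim            # reload the whole set in one go
module savelist                 # list the collections you have saved
```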
Because each cluster has a different base module path, a saved set is only valid for one architecture (the system type shown when you save).
If you try to load a module collection on a different system type you will see an error and the collection will not be restored.
For this reason you should never use module restore in job scripts. You can, of course, save the same set of modules with the same name on multiple clusters so as to have the same environment everywhere.
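In job scripts, load the modules explicitly instead. A minimal sketch, assuming a Slurm batch script (the module names are examples only):

```bash
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --time 01:00:00

# List the modules explicitly rather than relying on a saved collection
module purge
module load gcc mvapich2 fftw

srun ./my_application
```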
GPU environment and heterogeneous clusters
For homogeneous clusters such as Fidis and Castor the environment on the compute nodes is the same as that on the front-end machines. Deneb is slightly different as it has a partition containing machines with GPUs as well as a slightly different Infiniband configuration. If you wish to have access to the GPU node environment (i.e. the CUDA runtime and correct MPI) on the login machines then run:
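A sketch of the command; the -s flag and the architecture name are assumptions based on the table below, so verify them on your cluster:

```bash
slmodules -s x86_E5v2_Mellanox_GPU   # switch the module tree to the GPU-node architecture
```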
To switch back to the architecture of the machine on which you are working, run the command without options:
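That is, simply:

```bash
slmodules   # revert to the architecture of the machine you are logged in to
```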
The list of architectures you can currently switch to is:
| Architecture | Cluster(s) with nodes of this kind |
| | Deneb (deprecated only) |
| x86_E5v2_Mellanox_GPU | Deneb (deprecated only) |
| x86_E5v3_IntelIB | Deneb (deprecated only) |
Reverting to the old environment
To revert to the old environment, run "slmodules -r deprecated".
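Interactively, for example:

```bash
slmodules -r deprecated   # switch this shell to the deprecated release
module avail              # the listing now shows the old environment
```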
In a job script you need to source the full script "/ssoft/spack/bin/slmodules.sh":
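A sketch, assuming the script accepts the same -r option as the interactive command:

```bash
source /ssoft/spack/bin/slmodules.sh -r deprecated
module purge
module load gcc mvapich2 fftw   # example modules from the old release
```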
Behind the scenes
The software environment on the clusters is managed using Spack. EPFL is a major contributor to the tool: http://software.llnl.gov/spack/
This allows us to deploy software for multiple architectures and compiler/MPI variants in a consistent and automated manner.
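For the curious, a hypothetical Spack spec showing how a build can be pinned to a particular compiler and MPI implementation (the versions are examples only):

```bash
# Build FFTW against a specific compiler and MPI library (illustrative spec)
spack install fftw %gcc@8.3.0 ^mvapich2@2.3.1
```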