This is a general guide on how to use the clusters:
The key to using the clusters is to keep in mind that all tasks (or jobs) need to be handed to a batch system called SLURM. With this scheduler, your jobs will be launched according to different factors such as priority, availability of the nodes, etc.
Except for rare cases, the idea is not to have real-time interaction and, even in such cases, the jobs are still managed by the batch system.
All the clusters use SLURM, which is widely used and open source: http://slurm.schedmd.com
Running jobs with SLURM
The normal way of working is to create a short script that describes what you need to do and submit it to the batch system using the sbatch command.
Here is an example of a script running a code called moovit:
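#!/bin/bash
# Sketch of a submission script - all paths and resource values are placeholders to adapt.
# Note: --chdir is called --workdir on older SLURM versions.
#SBATCH --chdir /scratch/<username>
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --nodes 1
#SBATCH --mem 4096
#SBATCH --time 12:00:00

# Run the moovit executable (adapt the path to wherever your code is installed)
/home/<username>/code/moovit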
Any line beginning with #SBATCH is called a directive to the batch system. Type the command man sbatch for the whole explanation and the full list of options.
The six options in the directives are more or less mandatory and do the following:
- --chdir (or --workdir on older SLURM versions): the directory in which the job will be run and the standard output files written. This should ideally point to your scratch space.
- --ntasks: the number of tasks (in an MPI sense) to run per job.
- --cpus-per-task: the number of cores per aforementioned task.
- --nodes: the number of nodes to use. On Castor this is limited to 1, but it's good practice to request it anyway!
- --mem: the memory required in MB per node.
- --time: the time required. Note that there are many different formats to specify the time; see the manual of sbatch (by typing man sbatch) and look for the details on this option.
If the time and memory are not specified then default values will be imposed - these may well be lower than required!
This script should be saved to a file, for example moojob1.run, and in order to submit it we run the following command from one of the login nodes:
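sbatch moojob1.run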
The output will look something like
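Submitted batch job 123456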
The number returned is the Job ID and is the key to finding out further information or modifying the task.
To cancel a specific job:
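scancel 123456    # replace 123456 with the Job ID of the job to cancel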
To cancel all your jobs (use with care!):
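scancel -u $USER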
To cancel all your jobs that are not yet running:
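scancel -u $USER --state=PENDING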
Getting Job Information
There are a number of different tools that can be used to query jobs depending on exactly what information is needed.
If the name of a tool begins with a capital S then it is a SCITAS-specific tool; any tool whose name starts with a lowercase s is part of the base SLURM distribution.
Squeue shows information about all your jobs be they running or pending.
squeue will show you all the jobs from all users. This information can be modified by passing options to squeue.
To see all the running jobs from the scitas group we run:
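squeue --account=scitas --state=RUNNING    # assumes the group maps to the SLURM account of the same name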
See man squeue for all the options.
For example, the Squeue command described above is actually a script that calls:
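# (sketch - the exact options used by the SCITAS wrapper may differ)
squeue -u $USER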
scontrol will show you everything that the system knows about a running or pending job.
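For example, to inspect the job submitted above:
scontrol show job 123456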
Sjob is particularly useful to find out information about jobs that have recently finished.
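For example (assuming the wrapper takes the Job ID as its argument):
Sjob 123456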
Modules and provided software
Modules (Lmod) is a utility that allows multiple, often incompatible, tools and libraries to coexist on a cluster.
Scientific tools and libraries are provided as modules and you can see what is available by running the command module avail:
Initially you will only see the base modules - these are either compilers or stand-alone packages such as MATLAB.
In order to see more modules, including libraries and MPI distributions, you first need to load a compiler.
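For example, assuming a compiler module named gcc is provided (the exact module names may differ on your cluster):
module load gcc
module avail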
The full guide to how to use modules can be found here.
In your submission script we strongly recommend that you begin with the command module purge and then load the modules you need, so as to ensure that you always have the correct environment.
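A typical start of a submission script would therefore be (the module names below are only illustrative):
module purge
module load gcc openmpi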
Examples of submission scripts
There are a number of examples available on a git repository. To download these run the following command from one of the clusters:
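For example (the URL below is a placeholder - use the address of the SCITAS examples repository):
git clone <url-of-the-scitas-examples-repository>    # creates a directory called scitas-examples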
Enter the directory scitas-examples and choose the example to run by navigating the folders.
We have three categories of examples:
- Basic (examples to get you started)
- Advanced (including hybrid jobs and job arrays)
- Modules (specific examples of installed software).
To run an example (here: hybrid HPL), do:
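For example, assuming the submission script in the hybrid HPL folder is called hpl.run (the actual file name may differ):
sbatch -p debug hpl.run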
Or, if you do not wish to run on the debug partition:
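sbatch hpl.run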
Running MPI jobs
MPI is the acronym for Message Passing Interface and is now the de facto standard for distributed memory parallelisation.
It's an open standard with multiple implementations and we are now at version 3.
There are multiple MPI flavours that comply with the specification and each claims to have some advantage over the others.
Some are vendor specific and others are open source.
On the SCITAS clusters we fully support the following compiler/MPI combinations (July 2018 until July 2019):
Only these combinations are supported - this is a SCITAS restriction to prevent chaos; nothing technically stops one from mixing! They all work well and have good performance.
If we have an MPI code we need some way of correctly launching it across multiple nodes. To do this we use srun, which is SLURM's built-in job launcher:
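srun ./my_mpi_app    # my_mpi_app is a placeholder for your MPI executable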
To specify how many ranks and the number of nodes, we add the relevant #SBATCH directives to the job script.
For example, to launch our code on 4 nodes with 16 ranks per node we specify:
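#SBATCH --nodes 4
#SBATCH --ntasks-per-node 16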
There is no need to specify the number of ranks when you call srun as it inherits the value from the allocation.
Running OpenMP jobs
When running an OpenMP or hybrid OpenMP/MPI job the important thing to set is the number of OpenMP threads per process via the variable OMP_NUM_THREADS.
If this is not specified it often defaults to the number of processors in the system.
We can integrate this with SLURM as shown in the following hybrid (4 ranks, 4 threads per rank) task:
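#!/bin/bash
#SBATCH --ntasks 4
#SBATCH --cpus-per-task 4

# SLURM sets SLURM_CPUS_PER_TASK from the --cpus-per-task directive
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# my_hybrid_app is a placeholder for your own hybrid executable
srun ./my_hybrid_app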
This takes the environment variable set by SLURM and assigns the value to OMP_NUM_THREADS.
If you run such hybrid jobs we advise you to read the page on CPU affinity.
The Debug Partition
All the clusters have a few nodes that only allow short jobs; these are intended to give you quick access so you can debug jobs or quickly test input files.
To use these nodes you can either add the #SBATCH -p debug directive to your job script or specify it on the command line:
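sbatch -p debug moojob1.run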
Please note that the debug nodes must not be used for production runs of short jobs.
Any such use will result in access to the clusters being revoked.
Interactive jobs
There are two main methods of getting interactive (rather than batch) access to the machines. They have different use cases and advantages.
The Sinteract command allows you to log onto a compute node and run applications directly on it.
This can be especially useful for graphical applications such as Matlab and Comsol.
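It can typically be run without arguments from a login node (default resources are then requested):
Sinteract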
Use of graphical applications (X11) on the clusters
To be able to use graphical applications, you must first have connected to the login node with the -X option. (The use of -Y is unnecessary and highly discouraged as it is a security risk.)
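For example (replace the username and the host with the login node you use):
ssh -X <username>@fidis.epfl.ch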
Additionally you must have configured password-less login within the cluster with your default RSA ssh key (~/.ssh/id_rsa).
The quick check is that the following command must work without any user intervention (in this case it is executed from the Fidis cluster login node):
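ssh $(hostname) hostname    # e.g. ssh back into the login node itself - it should print the hostname without prompting for a password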
The following operation will destroy any existing keys, so please contact us if you are unsure of what you are doing.
To set up password-less login within the cluster, and only if you do not yet have a default RSA ssh key (~/.ssh/id_rsa), please run the following two commands:
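# standard approach: generate a default RSA key (accept the default location; leave the passphrase empty for password-less use)
ssh-keygen -t rsa
# authorise that key for logins within the cluster
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys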
In case you already have or need other keys installed (which have a password set, as recommended), you should rename them and use a .ssh/config file to make sure they are used instead of the default key for any services that need them.
salloc creates a reservation on the system that you can then access via srun.
It allows you to run multi-node MPI jobs in an interactive manner and is very useful for debugging problems with such tasks:
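salloc -N 2 -n 32 --time 00:30:00    # request an interactive allocation (the values are illustrative)
srun ./my_mpi_app                    # runs inside the allocation; my_mpi_app is a placeholder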
To gain interactive access on the node, we suggest you use Sinteract. If you wish to achieve a similar result with salloc, you can type, after having obtained access to your job allocation:
srun --pty bash
or, if you need a graphical display (see other preconditions above)
srun --x11 --pty bash