The key to using the clusters is to keep in mind that all tasks (or jobs) need to be given to a batch system called SLURM. With this scheduler, your jobs will be launched according to different factors such as priority, availability of the nodes, etc.

Except in rare cases, the idea is not to have real-time interaction; even in those cases, the jobs are still managed by the batch system.

All the clusters use SLURM, which is widely used and open source.

Running jobs with SLURM


Currently, only Fidis has a serial partition with a "pay-as-you-use" policy. On the other clusters, you will be charged for the whole nodes even if you use only a fraction of them.

If you only need a few cores, please make sure to use Fidis' serial partition. Failing to do so will increase the cost of your simulation!
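As a sketch, requesting the serial partition can be done with a directive in your job script. The partition name "serial" follows the text above; the resource values and executable name below are placeholders, so adjust them to your needs:

```shell
#!/bin/bash
# Sketch of a single-core job for Fidis' serial partition (pay-as-you-use).
#SBATCH --partition serial    # charge only the cores you use, not whole nodes
#SBATCH --ntasks 1            # one task
#SBATCH --cpus-per-task 1     # one core for that task
#SBATCH --time 01:00:00       # walltime limit (HH:MM:SS)

srun ./my_program             # "my_program" is a placeholder executable
```

With this setup you are billed for one core rather than the full node.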

More details can be found on the 2020 Annual Maintenance page.

The normal way of working is to create a short script that describes what you need to do and submit it to the batch system using the sbatch command.


Any line beginning with #SBATCH is called a directive to the batch system. Type the command man sbatch for the whole explanation and the full list of options.
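A minimal job script illustrating the workflow described above; the resource values and the executable name are placeholders, not site defaults:

```shell
#!/bin/bash
#SBATCH --nodes 1             # directives: lines starting with #SBATCH
#SBATCH --ntasks 1            # number of tasks to launch
#SBATCH --time 00:30:00       # walltime limit (HH:MM:SS)
#SBATCH --mem 4G              # memory per node

# The script body runs on the allocated node(s).
echo "Job $SLURM_JOB_ID running on $(hostname)"
srun ./my_simulation          # "my_simulation" is a placeholder executable
```

You would then submit it with, for example, sbatch job.sh; SLURM replies with the job ID and queues the job according to the factors mentioned above.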


This is the directory in which the job will run and where the standard output files will be written. It should ideally point to your scratch space. Please have a look at the file system documentation to know which data are backed up and which may be deleted without notice.
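A sketch of pointing the working directory to scratch with the standard sbatch --chdir option. The scratch path below is an assumption for illustration (note that SLURM does not expand shell variables such as $USER inside #SBATCH lines, so the path must be literal):

```shell
#!/bin/bash
#SBATCH --chdir /scratch/username/myrun   # placeholder path: the job runs and writes output here
#SBATCH --ntasks 1
#SBATCH --time 00:10:00

pwd    # prints the working directory set by --chdir above
```

The directory must exist before the job starts, otherwise the job will fail to launch.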

    --ntasks 1