


This page describes how to run jobs on Fidis and its Gacrux extension.

Node specifications

Both types of nodes are accessible from the Fidis frontend (ssh fidis.epfl.ch).

Fidis

408 compute nodes
  • 2 x Xeon E5-2690 v4 processors (each with 14 cores @ 2.6 GHz)
  • 336 nodes have 128 GB of RAM
  • 72 nodes have 256 GB of RAM

Infiniband FDR fully non-blocking connectivity with a fat-tree topology

Gacrux

216 compute nodes
  • 2 x Xeon 6132 processors (each with 14 cores @ 2.6 GHz)
  • 192 GB of RAM
  • EDR Infiniband interconnect

The nodes are arranged in non-blocking groups of 24, and the "Skylake" architecture offers significantly increased memory bandwidth.
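You can query SLURM directly to see which node types (and their feature tags) are available; a minimal sketch, assuming a standard SLURM installation:

```shell
# List nodes with CPU count, memory (MB) and available features
# (%N = node names, %c = CPUs, %m = memory, %f = feature tags such as E5v4 or s6g1)
sinfo -o "%20N %10c %10m %25f"
```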


Running on Gacrux

Please note that to make use of the new AVX-512 instructions, your codes will need to be recompiled. The centrally provided codes and libraries available through modules have already been optimised for the new architecture.
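If you compile your own codes, an architecture-specific flag is needed for the compiler to emit AVX-512 instructions. A minimal sketch, assuming GCC or the Intel compiler (the source and binary names are placeholders):

```shell
# GCC: target the Skylake server architecture, which enables AVX-512
gcc -O3 -march=skylake-avx512 -o my_app my_app.c

# Intel compiler: equivalent option for AVX-512 on Skylake server CPUs
icc -O3 -xCORE-AVX512 -o my_app my_app.c
```

Note that a binary built this way will not run on the older Fidis (Broadwell) nodes, so such jobs should be submitted with the Gacrux constraint described below.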


Running on the Fidis nodes


If you wish to use only the Fidis nodes then please specify:


#SBATCH --constraint=E5v4
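A complete job script using this constraint might look as follows; the job name, resource values and executable are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=fidis_job
#SBATCH --constraint=E5v4      # restrict the job to the Fidis (Broadwell) nodes
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28   # 2 x 14 cores per Fidis node
#SBATCH --time=01:00:00

srun ./my_app
```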

Running on the Gacrux nodes



If you wish to specifically ask for Gacrux nodes then please use the following SLURM directive:



#SBATCH --constraint=s6g1
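The constraint can also be given at submission time, since sbatch command-line options take precedence over the #SBATCH directives inside the script; this is convenient for testing the same script on both architectures (the script name is a placeholder):

```shell
# Submit the job to the Gacrux (Skylake) nodes regardless of
# any --constraint line inside job.sh
sbatch --constraint=s6g1 job.sh
```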



If you do not specify a constraint then jobs may run on either partition, but a single job will never span both architectures.
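If your code runs on either architecture and you simply want the shortest time in the queue, SLURM also accepts an OR of features in a constraint; a sketch:

```shell
#SBATCH --constraint="E5v4|s6g1"   # run on whichever architecture is free first
```

Even with an OR constraint, the whole job still lands on a single architecture.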


Debug nodes

Two of the Gacrux nodes are available through the debug partition along with four Fidis nodes.
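To use these nodes, select the debug partition and, optionally, a constraint to pick the architecture; the time limit below is a placeholder:

```shell
#SBATCH --partition=debug
#SBATCH --constraint=s6g1    # optional: debug on a Gacrux node
#SBATCH --time=00:30:00
```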


Build nodes

Two Gacrux nodes and two Fidis nodes are available for compiling codes via the build partition.
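For compiling, you can request an interactive allocation on the build partition; a minimal sketch using standard SLURM commands (the time limit and module name are placeholders):

```shell
# Request an interactive shell on a build node
salloc --partition=build --constraint=s6g1 --time=01:00:00
# ...then compile as usual, e.g.:
#   module load gcc
#   make
```

Requesting the same constraint (E5v4 or s6g1) for the build allocation as for the eventual job ensures the binary is built for the architecture it will run on.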