This page explains what to do after having successfully connected to one of the clusters.
What is installed on the clusters
To see the list of installed software (modules), load the compiler and the MPI implementation you intend to use, then list what is available.
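Assuming the standard Lmod/environment-modules setup, the listing command is:

```bash
# Show the software modules visible with the loaded toolchain
module avail
```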
Open Source or proprietary?
On our systems, software modules are compiled and installed either with the Intel compiler and Intel MPI or with GCC and MVAPICH2; these are the only supported compiler/MPI combinations.
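For example, the two combinations might be loaded as follows (the exact module names are assumptions; check `module avail` for what your cluster provides):

```bash
module load intel intel-mpi   # Intel compiler + Intel MPI (names assumed)
# or
module load gcc mvapich2      # GCC + MVAPICH2 (names assumed)
```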
Getting the examples
Once you have logged in to the machine, we suggest you download the examples with git.
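For instance (the repository URL is an assumption; substitute the official one if yours differs):

```bash
# Clone the SCITAS examples repository (URL assumed)
git clone https://github.com/epfl-scitas/scitas-examples.git
```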
Here is a list of our examples:
Running the examples
Enter the `scitas-examples` directory and choose an example to run by navigating the folders. We have three categories of examples: Basic (examples to get you started), Advanced (including hybrid jobs and job arrays), and Modules (specific examples for installed software).
To run an example, e.g. HPL-mpi from the Advanced category, enter its folder and submit its batch script to the scheduler.
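A sketch, assuming the script is named `hpl.run` (check the folder for the actual name):

```bash
cd scitas-examples/Advanced/HPL-mpi
sbatch -p debug hpl.run   # submit to the debug partition
```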
or, if you do not wish to run on the debug partition, submit the same (assumed) script without the `-p debug` flag:
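```bash
# Same assumed script name, default partition
sbatch hpl.run
```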
Running interactive jobs
An interactive job allows you to connect directly to a compute node and type commands that run there. Simply type the command `Sinteract` from the login node to start an interactive session with 1 core and 4 GB of memory for 30 minutes.
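For example, from the login node:

```bash
# Starts an interactive session with the defaults: 1 core, 4 GB, 30 minutes
Sinteract
```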
You can pass parameters to `Sinteract` (for help, type `Sinteract -h`) to request more resources or more time.
For example, you might run an MPI job with 16 processes for one hour, using 32 GB of memory on a debug node.
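The option names below are assumptions; run `Sinteract -h` to confirm the exact flags:

```bash
# Assumed flags: -n tasks, -t time limit, -m memory, -p partition
Sinteract -n 16 -t 01:00:00 -m 32G -p debug
```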
On the Izar cluster, the `-g` option is necessary to request the desired number of GPUs. For example:

```bash
Sinteract -g gpu:1
```