...
Overview of available file systems
File system (mount point) | Environment variable | Purpose | Type and size | Backup and snapshots | Intended lifetime | Cleanup strategy | Quota | Available for |
---|---|---|---|---|---|---|---|---|
/home | $HOME | Source files, input data and small files. Globally accessible from login and compute nodes. | GPFS, 100 TB | Backup to tape and snapshots | Account lifetime | No cleanup | 100 GB per user | All users |
/work | $WORK | Collaboration space for a group. Globally accessible from login and compute nodes. | GPFS, 100 TB | Backup upon request, at cost price. Snapshots. | Group lifetime | No cleanup | 50 GB* per group | All EPFL units (upon request). Not available for bachelor/master students. |
/scratch | $SCRATCH | Temporary, large result files. Accessible from the front-end and compute nodes within one cluster. | Fidis: Helvetios: Izar: | No backup, no snapshots | 2 weeks | Files and empty directories older than two weeks may be deleted automatically without notice | Quotas may be in place to prevent runaway jobs from filling the file system; a typical limit is 1/3 to 1/2 of the total space | All users |
/tmp/${SLURM_JOB_ID} | $TMPDIR | Temporary, node-local file space for jobs on compute nodes. Not available on login nodes. | Local (per node), between 64 and 512 GB | No backup, no snapshots | Job execution | Contents are deleted when the job ends | No quota | All users |
* Space on /work is charged for, so a group's quota depends on the amount of space purchased. There is no backup by default, but a laboratory may request one; backup is charged at SCITAS's cost price.
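
As a sketch of how these locations are typically combined inside a job, the hypothetical Python script below stages its working data to the node-local $TMPDIR and copies the results to $SCRATCH before the job ends. The file names, the fallback to /tmp, and the copy-back destination are illustrative assumptions, not part of the SCITAS documentation.

```python
import os
import shutil
from pathlib import Path

# Node-local, fast storage created by Slurm for this job (deleted at job end).
# Falling back to /tmp outside a job is an assumption for illustration only.
tmpdir = Path(os.environ.get("TMPDIR", "/tmp"))

# Cluster-wide scratch space; results copied here survive the job,
# but only for about two weeks (no backup, automatic cleanup).
scratch = Path(os.environ["SCRATCH"])

workdir = tmpdir / "run"
workdir.mkdir(parents=True, exist_ok=True)

# ... compute phase: write intermediate and result files under workdir ...
(workdir / "result.txt").write_text("example output\n")

# Before the job ends, copy anything worth keeping off the node-local disk,
# since /tmp/${SLURM_JOB_ID} is removed as soon as the job finishes.
dest = scratch / f"results-{os.environ.get('SLURM_JOB_ID', 'interactive')}"
shutil.copytree(workdir, dest, dirs_exist_ok=True)
print(f"Results copied to {dest}")
```

Longer-lived inputs and source files would stay under $HOME or $WORK, which are backed up, while $SCRATCH and $TMPDIR are treated as disposable.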
...