...
Load the compiler you want to use and Python via modules:
Code Block language bash
$ module load gcc
$ module load python
Code Block language bash
$ module load intel
$ module load python
Create a virtual environment
In the following, we use GCC, but the same procedure applies to the Intel compiler. In your home folder, create a virtual environment:
Code Block language bash
$ virtualenv -p python3 --system-site-packages opt/venv-gcc
Running virtualenv with interpreter /ssoft/spack/arvine/v1/opt/spack/linux-rhel7-skylake_avx512/gcc-8.4.0/python-3.7.7-drpdlwdbo3lmtkcbckq227ypnzno4ek3/bin/python3
Already using interpreter /ssoft/spack/arvine/v1/opt/spack/linux-rhel7-skylake_avx512/gcc-8.4.0/python-3.7.7-drpdlwdbo3lmtkcbckq227ypnzno4ek3/bin/python3
Using base prefix '/ssoft/spack/arvine/v1/opt/spack/linux-rhel7-skylake_avx512/gcc-8.4.0/python-3.7.7-drpdlwdbo3lmtkcbckq227ypnzno4ek3'
New python executable in /home/user/opt/venv-gcc/bin/python3
Also creating executable in /home/user/opt/venv-gcc/bin/python
Installing setuptools, pip, wheel... done.
Activate the virtual environment
Code Block language bash
$ source opt/venv-gcc/bin/activate
(venv-gcc) [user@izar ~]$
Install Jupyter and ipyparallel
Code Block language bash
(venv-gcc) [user@izar ~]$ pip install jupyter ipyparallel
Collecting jupyter
..
..
Set up passwordless access to Izar using an SSH key and add the following to your ~/.ssh/config file on your personal computer. It is assumed that you are inside the EPFL network.
Code Block language bash
Host izar
    Hostname izar.epfl.ch
    User [username]
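If you do not yet have key-based access, one way to set it up from your personal computer is sketched below (assuming a default key location and your EPFL username in place of [username]):
Code Block language bash
# Generate a key pair if you do not already have one (accept the defaults)
$ ssh-keygen -t ed25519
# Copy the public key to Izar (you will be asked for your password once)
$ ssh-copy-id [username]@izar.epfl.ch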
Run jupyter notebook and ipcluster on Izar.
The script below is a template that allows you to start ipcluster on a compute node of Izar. You can copy it into a file called launch_jupyter.sh, for example. It has to be placed in your home folder (or you have to modify it accordingly). Note that all the modules that Jupyter may need have to be loaded here, otherwise it won't be able to use them in the notebook. As an example, we load the modules that allow us to use TensorFlow, to match a real scenario as closely as possible. Please use the modules you need for your case.
Code Block language bash title launch_jupyter.sh
#!/bin/bash -l
#SBATCH --job-name=ipcluster
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --time=01:00:00
#SBATCH --output jupyter-log-%J.out

# Load the modules the notebook will need and activate the virtual environment
module load gcc/8.4.0-cuda cuda/10.2.89 cudnn/7.6.5.32-10.2-linux-x64 mvapich2/2.3.4-cuda py-tensorflow
source opt/venv-gcc/bin/activate

# Create an IPython profile named after the Slurm job
profile=job_${SLURM_JOB_ID}
echo "creating profile: ${profile}"
ipython profile create ${profile}

# Start the controller, then the engines on the allocated node
echo "Launching controller"
ipcontroller --ip="*" --profile=${profile} &
sleep 10
echo "Launching engines"
srun ipengine --profile=${profile} --location=$(hostname) 2> /dev/null 1>&2 &

# Start the notebook server on a random port
ipnport=$(shuf -i8000-9999 -n1)
XDG_RUNTIME_DIR=""
echo "$(hostname):${ipnport}" > jupyter-notebook-port-and-host
jupyter-notebook --no-browser --port=${ipnport} --ip=$(hostname -i)
Launch your job as usual:
Code Block language bash
sbatch launch_jupyter.sh
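You can check that the job has started, for example with squeue:
Code Block language bash
$ squeue -u $USER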
Once the job is running, analyze the output file jupyter-log-[SLURM_ID].out and look for a line like the following:
Code Block language bash
Or copy and paste one of these URLs:
    http://10.91.27.63:8504/?token=4b17ae5cfa505b5470dc84bb5240ab43ae714aa9480a163c
It has the form:
Code Block language bash
http://<IP ADDRESS>:<PORT NUMBER>/?token=<TOKEN>
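If you prefer not to scan the log by hand, one way to pull the line out is to grep it (assuming the output file name used in the script above):
Code Block language bash
$ grep 'http://' jupyter-log-*.out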
On your local machine, create an SSH tunnel using the information provided by the above step:
Code Block language bash
ssh -L <PORT NUMBER>:<IP ADDRESS>:<PORT NUMBER> izar -f -N
For our example, this gives:
Code Block language bash
ssh -L 8504:10.91.27.63:8504 izar -f -N
Now you should be able to access the Izar compute node through your web browser by pasting the following address:
Code Block language bash
http://localhost:<PORT NUMBER>/?token=<TOKEN>
For our example, this gives:
Code Block language bash
http://localhost:8504/?token=4b17ae5cfa505b5470dc84bb5240ab43ae714aa9480a163c
Create a Jupyter notebook and add the following:
Code Block language py
import ipyparallel as ipp

c = ipp.Client(profile='job_[SLURM_JOB_ID]')
view = c[:]
Replace [SLURM_JOB_ID] with the job number you obtain by running the command: squeue -u $USER
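As a quick check that the engines are up, you can list the engine IDs and run a trivial computation on all of them in the same notebook (a minimal sketch; the function and value are arbitrary):
Code Block language py
print(c.ids)                                 # one ID per engine started by srun
print(view.apply_sync(lambda x: x ** 2, 4))  # run a trivial function on every engine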
...