Singularity - How to Run on HPC¶
We have already discussed containers in Singularity Containers. Have a look at it if you wish to get an insight into the world of containers. This section deals with using Singularity on the HPC.
Warning
On the HPC, Singularity containers must only be run on compute nodes, as they are too resource-intensive for the login nodes.
Commands at a glance¶
The most commonly used Singularity commands are given below. container.sif refers to a sample container image and example.def refers to a sample definition file.
# Build a container
singularity build container.sif example.def
# Pull an existing container from a hub
singularity pull docker://gcc
# Shell into the container
singularity shell container.sif
# Run a container (more on this in subsequent sections)
singularity run container.sif
# Attach an overlay to an existing container
singularity shell --overlay overlay.ext3 container.sif
# Execute a command inside the container
singularity exec container.sif ls -l
# Mount/bind a filesystem into the container
singularity shell -B /share/apps/NYUAD container.sif
# Singularity help (very helpful)
singularity help
singularity help build
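Note that pull names the downloaded image after its source by default; you can also give it an explicit name. A small example, using the gcc image from above:
# download the gcc image from Docker Hub and save it as gcc.sif
singularity pull gcc.sif docker://gcc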
Interact With Container¶
Shell
The shell command allows you to spawn a new shell within your container and interact with it as though it were a small virtual machine.
singularity shell hello-world.sif
Don’t forget to exit when you’re done.
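As a quick illustration, an interactive session might look like the following (the commands are just examples):
$ singularity shell hello-world.sif
Singularity> cat /etc/os-release   # you are now inside the container
Singularity> exit                  # back to the host shell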
Executing commands
The exec command allows you to execute a custom command within a container by specifying the image file.
singularity exec hello-world.sif ls -l /
singularity exec hello-world.sif /scratch/user/userid/myprogram
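Since exec runs the command and then returns to the host shell, it combines with host-side redirection and pipes as usual; a small sketch (the output file name is hypothetical):
# the listing is produced inside the container, but listing.txt is written on the host
singularity exec hello-world.sif ls -l / > listing.txt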
Running a container
Execute the default runscript defined in the container:
singularity run hello-world.sif
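If you are unsure what a container will do when run, you can print its runscript first; singularity inspect supports a --runscript flag:
# show the default runscript without executing it
singularity inspect --runscript hello-world.sif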
Files in a container¶
The filesystem inside the container is isolated from the filesystem outside the container. In order to access your files on a real, physical filesystem, you have to ensure that the filesystem’s directory is mounted. By default, Singularity will mount the $HOME, $SCRATCH and /share/apps/NYUAD directories, as well as the current working directory $PWD. To specify additional directories, use the SINGULARITY_BINDPATH environment variable or the --bind (-B) command line option.
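The --bind option also accepts a source:destination pair if you want a host directory to appear at a different path inside the container. A minimal sketch, with example paths:
# mount the host directory /scratch/$USER/data at /data inside the container
singularity exec -B /scratch/$USER/data:/data container.sif ls /data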
Tip
To access the cluster filesystems inside the container, it is convenient to pre-create the corresponding mount-point directories in your container:
mkdir /scratch
mkdir /share/apps/NYUAD
export SINGULARITY_BINDPATH="/scratch,$TMPDIR"
# or
singularity shell --bind "/scratch,$TMPDIR" container.sif
Singularity Overlays¶
You can use Singularity overlays to have a writable filesystem on top of your existing container. This is useful in the following scenarios:
Installing applications on top of an existing container
Storing directories which generate or contain a large number of small files (on the order of 100K)
Conda installations, which consume the file-count quota
You can use the overlay filesystem with your existing container as follows:
singularity shell --overlay overlay.ext3 container.sif
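If you do not already have an overlay image, one way to create one is to make an empty file and format it as ext3. A minimal sketch; the 500 MB size is just an example, so adjust it to your needs:
# create a 500 MB file filled with zeros
dd if=/dev/zero of=overlay.ext3 bs=1M count=500
# format it as ext3 (-F is needed to format a regular file)
mkfs.ext3 -F overlay.ext3
# use it as a writable layer on top of the container
singularity shell --overlay overlay.ext3 container.sif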
For more information on overlays, please see the links below:
Singularity Overlays on HPC
GPU in a container¶
If your container has been compiled with CUDA version >= 9, it should work with the local GPUs. Just add the --nv flag to your singularity command.
singularity exec --nv tensorflow-gpu.sif python3
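To verify that the GPUs are actually visible inside the container, you can, for example, list them from TensorFlow (assuming a TensorFlow 2 image such as the one above):
# prints the GPU devices the framework can see; an empty list means no GPU access
singularity exec --nv tensorflow-gpu.sif python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"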
Sample job script¶
#!/bin/bash
# your SBATCH directives go here
#SBATCH -n 10
# execute the default runscript defined in the container
singularity run tensorflow.sif
# execute a command within the container
# use an absolute path if the command is not in the default search path
singularity exec tensorflow.sif /scratch/wz22/run.sh
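For GPU jobs, the same pattern applies with the --nv flag. A minimal sketch, assuming your cluster uses --gres=gpu to request GPUs and that /scratch/wz22/train.py is your own training script:
#!/bin/bash
#SBATCH -n 1
#SBATCH --gres=gpu:1

# --nv makes the host GPUs and driver libraries visible inside the container
singularity exec --nv tensorflow-gpu.sif python3 /scratch/wz22/train.py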