Apptainer (Singularity)

Introduction

Singularity is a container technology, also known as light virtualization (as the performance overhead compared to bare metal is close to zero), originally developed for HPC by the Lawrence Berkeley National Laboratory, U.S.A.

It is a recent, rapidly developing product with a large and growing user community. A side effect of this interest is that features evolve quickly.

Note

The recent Singularity versions installed come from the open-source fork called Apptainer. The version numbering was restarted at 1.0.0, which corresponds to Singularity 3.9.5. The main features are however preserved.

Use of Apptainer

Apptainer is provided at CC-IN2P3 on all interactive as well as computing servers. The LHC experiments are interested in this technology; CC-IN2P3 therefore decided to follow the WLCG recommendations and provides the latest version taken from the EPEL repository. This version is de facto the default version provided by CC-IN2P3.

Note

Different releases are nevertheless available at CC-IN2P3. Please use the software loader syntax to list and activate the required release, as sketched below.
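
For illustration only, assuming an Environment Modules-style loader (the actual CC-IN2P3 syntax may differ; refer to the software loader documentation for the exact commands):

% module avail apptainer        # hypothetical: list the available releases
% module load apptainer/1.1.3   # hypothetical: activate a given release (version number illustrative)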

Attention

Even though you are using apptainer, you may still use the singularity name to call the binary, as a wrapper is provided. Note however that this wrapper is not maintained by the CC-IN2P3 staff, and may thus be removed at some point.
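
For instance, both names should behave identically (the version shown here is only illustrative):

% singularity --version
apptainer version 1.1.3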

Image repository

CC-IN2P3 provides a Singularity/Apptainer image repository (see Image use and management below for more details) containing images for the major versions of GNU/Linux distributions.

Two main image types can be found, according to the computing platform they are best suited for: the HTC cluster or the HPC one (the latter category including GPU-specific images). Some images may also contain more specific software installed on request (check the name and/or the metadata of the image; see the inspection example below).

% ls /cvmfs/singularity.in2p3.fr/images/
HPC/  HTC/
% ls /cvmfs/singularity.in2p3.fr/images/HTC/
sl6/  ubuntu/
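
To check the metadata of a given image, the inspect sub-command displays the labels recorded in it (the actual output depends on how the image was built):

% apptainer inspect /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg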

Most of the images are in SquashFS format (.simg or .sif extension for Singularity v2 or v3, respectively). Our user support will help you solve any problem you may encounter when using one of these images.

Invoking a container

Run a script or a command within an image as follows:

% apptainer exec /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg ls

Here we invoke a container in which we issue the command ls, whose result will be shown on the screen. Once the command is executed, the container is destroyed. For more details, please refer to the official documentation Singularity exec.

In the next example, we invoke a container in which we request a shell. We then get the specific OS environment of the chosen image.

% apptainer shell /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
Singularity> cat /etc/redhat-release
Scientific Linux release 6.10 (Carbon)
Singularity>
% apptainer shell /cvmfs/singularity.in2p3.fr/images/HTC/ubuntu/ubuntu1804-CC3D.simg
Singularity> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"
Singularity>

Note

An exit command or a CTRL+D will be enough to exit (destroy) the container and return to the original shell.

Warnings may appear during the container invocation. They usually come from inconsistencies in the image configuration or, more often, from errors while importing the shell preferences within the container. However, this does not prevent the container from running.

With the current version of Singularity, all the external mount points we want to find inside the container must be declared at invocation time. For instance, we may want to have access to the /sps storage. To do so, we usually use the bind option as follows:

--bind <filesystem to mount>:<mount point inside the container (must exist in the image)>

A default mount point, /srv, usually exists. So, to keep a directory we have access to (here /sps/phenix in the example) available inside the container, one can do:

% ls /sps/phenix/
Run2pp/  Run3dAu/  Run3pp/  Run4AuAu/  Run7AuAu/
% apptainer shell --bind /sps/phenix:/srv /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
Singularity> ls /srv
Run2pp/  Run3dAu/  Run3pp/  Run4AuAu/  Run7AuAu/
Singularity>

Official CC-IN2P3 images also provide a /sps mount point. Two different filesystems may be mounted inside the container:

% apptainer shell --bind /sps/phenix:/sps --bind /pbs/throng/phenix:/srv /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg

Note

The PATH in all the scripts run inside the container must be carefully updated, since the bind mount points may differ from the native paths.
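
For example, with the binds shown above, a tool installed under /pbs/throng/phenix/bin would be seen under /srv/bin inside the container (paths illustrative):

Singularity> export PATH=/srv/bin:$PATH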

Attention

Contrary to the default Singularity configuration, the user's $HOME is not automatically mounted within the container.

As said above, the $HOME mount is no longer performed within the container at CC-IN2P3. To access it, one has to use the --bind or -B option to mount it within the container. Since the $HOME directory resides in /pbs, just like the $THRONG directories, it is a good idea to mount the root of /pbs in the container, which then gives access to both the $HOME and the $THRONG directories at once.
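
For instance, a single path given to --bind is mounted at the same location inside the container (a minimal sketch, assuming the /pbs mount point exists in the image or can be created by the runtime):

% apptainer shell --bind /pbs /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
Singularity> ls $HOME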

Submitting a job into a container

The first method shown above may be used to submit a job to the computing cluster; in other words, one may request to execute a script inside the container. For example, consider the script my_script.sh, which needs an SL6 environment. The script must, of course, have execution permissions:

% chmod u+x my_script.sh
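
As an illustration, my_script.sh could be as simple as (hypothetical content):

#!/usr/bin/env bash
# this runs inside the SL6 container: check the OS release
cat /etc/redhat-release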

The script my_job_apptainer.sh, which will be submitted to the computing platform, will have the following content:

#!/usr/bin/env bash
# bind /pbs so that $HOME (located under /pbs) is reachable inside the container
apptainer exec --bind /pbs /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg $HOME/my_script.sh

As a result, submitting my_job_apptainer.sh will allow my_script.sh to run inside an SL6 container, which itself runs in the computing platform environment.
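
For instance, assuming the batch system is Slurm (submission options depend on your project and are omitted here):

% sbatch my_job_apptainer.sh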

Important

To submit a GPU job taking the drivers into account through a Singularity/Apptainer image, the --nv parameter must be added to the execution line:

apptainer exec --nv [other options] [image] [script]
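
For instance, to check that the GPUs are visible from inside a container (the image path below is hypothetical; nvidia-smi is made available through the driver bind performed by --nv):

% apptainer exec --nv /cvmfs/singularity.in2p3.fr/images/HPC/centos7-gpu.simg nvidia-smi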

Another useful command to run a script within a container is run; please refer to the official documentation for details: Singularity run.
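
The run sub-command executes the runscript embedded in the image, if any (whether a given CC-IN2P3 image defines one is not guaranteed):

% apptainer run /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg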

For more detailed information, please refer to the official documentation, or contact the user support.

To go even further, you can refer to the documents used during the training sessions provided by CC-IN2P3.

Image use and management

The images provided by CC-IN2P3 have been tested and are available from CVMFS:

% ls /cvmfs/singularity.in2p3.fr/images/
HPC/  HTC/
% ls /cvmfs/singularity.in2p3.fr/images/HTC/
sl6/  ubuntu/

You are allowed to create and import your own images. In order to minimize the performance penalties when running large images, it is recommended to store them on a storage system with good I/O performance, and to avoid compressed archives.

To meet this constraint, CC-IN2P3 provides a stratum 0 area in CVMFS, specifically created to fulfil this need. The images should preferably be structured as a directory tree, which allows an advanced CVMFS feature to read them file by file and thus optimize the I/O performance. The repository is the one discussed above.
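
As an illustration, such an unpacked directory tree (a "sandbox" image) can be produced with the build sub-command (the source image reference is only illustrative):

% apptainer build --sandbox my_image/ docker://rockylinux:9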

To import your images inside the repository, please contact the user support.

CC-IN2P3 cannot guarantee that these ‘non-official’ images will run smoothly on the CC-IN2P3 computing platform. However, our experts will do their best to provide support and help you solve your problems.

Another interesting type of image to use is Docker images. Singularity can indeed invoke Docker images and convert them into Singularity images if needed. For more information, please refer to the section Singularity and Docker in the official documentation.
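
For instance, a Docker image can be pulled and converted into a SIF file, then used like any other image (the docker://ubuntu:22.04 reference is only an illustration):

% apptainer pull docker://ubuntu:22.04
% apptainer exec ubuntu_22.04.sif cat /etc/lsb-release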