Singularity

Introduction

Singularity is a container technology, also known as lightweight virtualization (its performance overhead compared to bare metal is close to zero), originally developed for HPC at the Lawrence Berkeley National Laboratory (U.S.A.).

It is a recent, rapidly developing product with a large and growing user community. A side effect of this interest is that its features evolve quickly.

Use of Singularity

Singularity is available at CC-IN2P3 on all interactive servers as well as on all compute servers.

Version availability

The LHC experiments are interested in this technology, so CC-IN2P3 decided to follow the versions released by WLCG. The currently available version is 2.6.1-dist, taken from the EPEL repository. More recent versions are available through ccenv:

% ccenv singularity --list
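The listing command above comes from this page; loading a specific version with "ccenv singularity <version>" is an assumption (check `ccenv --help` for the exact syntax, and take the version string from the --list output). The sketch below only assembles the command lines:

```shell
# The --list invocation is taken from this page; the version-loading form
# is an assumption, and 3.8.7 is only an example version string.
LIST_CMD="ccenv singularity --list"
LOAD_CMD="ccenv singularity 3.8.7"
echo "$LIST_CMD"   # drop the echo indirection to actually run the commands
echo "$LOAD_CMD"
```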

Image repository

CC-IN2P3 provides an image repository (see Image use and management below for more details), with images for the major versions of the following GNU/Linux distributions:

  • Red Hat Enterprise Linux (or Scientific Linux)
  • CentOS
  • Ubuntu
  • Debian

Two main image types are available, according to the computing platform they are most suited for: the HTC cluster or the HPC one (the latter category including GPU-specific images). Some images also contain additional software installed on request (check the name and/or the metadata of the image).

% ls /cvmfs/singularity.in2p3.fr/images/
HPC/  HTC/
% ls /cvmfs/singularity.in2p3.fr/images/HTC/
sl6/  ubuntu/

Most images are in SquashFS format (.simg or .sif extension for Singularity v2 or v3, respectively).

Invoking a container

Run a script or a command within an image as follows:

% singularity exec /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg ls

Here we invoke a container in which we issue the command ls, whose result is shown on screen. Once the command has been executed, the container is destroyed. For more details, please refer to the official documentation: Singularity exec.
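Arguments placed after the image path are passed to the command run inside the container. The sketch below only assembles such a command line, using the image path from the example above (drop the echo indirection on a machine where the image is available):

```shell
# Build a 'singularity exec' command line; everything after the image path
# is the command (and its arguments) executed inside the container.
IMAGE=/cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
CMD="singularity exec $IMAGE cat /etc/redhat-release"
echo "$CMD"   # remove the echo indirection to actually invoke the container
```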

In the next examples, we invoke a container in which we request a shell. We then get the specific OS environment of the chosen image.

% singularity shell /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
Singularity: Invoking an interactive shell within container...
Singularity sl6-cc-atlas_v0.2.simg:/> cat /etc/redhat-release
Scientific Linux release 6.10 (Carbon)
% singularity shell /cvmfs/singularity.in2p3.fr/images/HTC/ubuntu/ubuntu1804-CC3D.simg
Singularity: Invoking an interactive shell within container...
Singularity ubuntu1804-CC3D.simg:/> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"

Warnings may appear during the container invocation. They usually come from image configuration inconsistencies or, more often, from errors while importing the shell preferences into the container. However, they do not prevent the container from running.

With the current version of Singularity (2.6.1-dist), we need to declare at invocation time all the external mount points we want to find inside the container. For instance, we may want access to the /sps storage. To do so, we usually use the bind option as follows:

--bind <filesystem to mount>:<mount point inside the container (must exist in the image)>

A default mount point, /srv, usually exists. So, to keep /sps/hep/phenix available inside the container, one can do:

% ls /sps/hep/phenix/
Run2pp/  Run3dAu/  Run3pp/  Run4AuAu/  Run7AuAu/
% singularity shell --bind /sps/hep/phenix:/srv /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
Singularity: Invoking an interactive shell within container...
sh-4.1$ ls /srv
Run2pp/  Run3dAu/  Run3pp/  Run4AuAu/  Run7AuAu/
sh-4.1$

Official CC-IN2P3 images also provide a /sps mount point. Two different filesystems may be mounted inside the container:

% singularity shell --bind /sps/hep/phenix:/sps --bind /pbs/throng/phenix:/srv /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
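The bind flags can also be assembled in a small wrapper script. A minimal sketch using the paths from the examples above (the final echo keeps the sketch runnable anywhere; drop it on a machine where the image is available):

```shell
#!/usr/bin/env bash
# Build one --bind flag per <source>:<destination> pair, then show the
# resulting 'singularity shell' command line.
IMAGE=/cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
BIND_ARGS=""
for spec in /sps/hep/phenix:/sps /pbs/throng/phenix:/srv; do
    BIND_ARGS="$BIND_ARGS --bind $spec"
done
echo singularity shell $BIND_ARGS "$IMAGE"
```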

Note

The PATH in all the scripts run inside the container must be carefully updated.

Warning

Contrary to the default Singularity configuration, the user's $HOME is not automatically mounted within the container.

As said above, $HOME is no longer mounted within the container at CC-IN2P3. To access it, use the --bind (or -B) option to mount it within the container. Since the $HOME directory resides in PBS, just like the $THRONG directories, it is a good idea to mount the root of /pbs in the container, which gives access to both the $HOME and the $THRONG directories at once.
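A sketch of that approach, again only assembling the command line (drop the echo indirection to actually run it):

```shell
# Mounting the root of /pbs makes both $HOME and the $THRONG directories
# visible inside the container; with a single path and no colon, --bind
# reuses the source path as the mount point.
IMAGE=/cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg
BIND="--bind /pbs"
echo singularity shell $BIND "$IMAGE"
```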

Submitting a job into a container

The first method shown above may be used to submit a job to the computing cluster; in other words, one may request that a script be executed inside the container. For instance, here is the script mon_job_singularity.sh I want to run (the idea being to run my_script.sh inside an SL6 container):

#!/usr/bin/env bash
singularity exec /cvmfs/singularity.in2p3.fr/images/HTC/sl6/sl6-cc-atlas.simg $HOME/my_script.sh

Then I can submit the job:

% qsub -q long -l os=cl7 mon_job_singularity.sh

In the end, we are running the script my_script.sh inside an SL6 container, which itself runs in a CentOS 7 environment.

Another useful Singularity command to run a script within a container is run; for details, please refer to the official documentation: Singularity run.

For more detailed information, please refer to the official documentation, or contact the user support.

Image use and management

CC-IN2P3 provides images for the major release versions of the following GNU/Linux distributions:
  • Red Hat Enterprise Linux (or Scientific Linux)
  • CentOS
  • Ubuntu
  • Debian

These images have been tested and are available from CVMFS:

% ls /cvmfs/singularity.in2p3.fr/images/
HPC/  HTC/
% ls /cvmfs/singularity.in2p3.fr/images/HTC/
sl6/  ubuntu/

The user support can help you to solve any problem you may encounter when using one of these images.

You are allowed to create and import your own images. In order to minimize the performance penalties when running large images, it is recommended to store them on storage with good I/O performance, and to avoid compressed archives.

To address this constraint, CC-IN2P3 provides a stratum 0 area in CVMFS specifically created to fulfil this need. The images should preferably be organized as a directory tree, allowing the use of an advanced CVMFS feature that reads file by file to optimize I/O performance. The repository is the one discussed above.

To import your images inside the repository, please contact the user support.

CC-IN2P3 cannot guarantee that these 'non-official' images will run smoothly on the CC-IN2P3 computing platform. However, our experts will do their best to provide support and help you solve your problems.

Another interesting type of image to use is Docker images. Singularity can indeed invoke Docker images and convert them into Singularity images if needed. For more information, please refer to the section Singularity and Docker in the official documentation.
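A sketch of the two usual ways to use a Docker image (the image name centos:7 is only an example, and both commands need network access to Docker Hub, so the sketch only assembles the command lines):

```shell
# Convert a Docker Hub image into a local Singularity image...
PULL_CMD="singularity pull docker://centos:7"
# ...or invoke it directly, letting Singularity do the conversion on the fly.
EXEC_CMD="singularity exec docker://centos:7 cat /etc/redhat-release"
echo "$PULL_CMD"   # drop the echo indirection to actually run the commands
echo "$EXEC_CMD"
```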