
Singularity User Guide on ISAAC-NG



Introduction

Singularity is a container platform designed for high-performance computing (HPC) clusters. Containers allow the packaging of software and their associated environments (OS, libraries, tools, etc.) within an encapsulated file that can be executed on diverse systems without explicitly porting or integrating with the OS, libraries, or environment specific to the host system.

Singularity version 3.8.6-1.el8 is available in the default environment on Open Enclave login and compute nodes. No module needs to be loaded to access Singularity on the Open Enclave.

[Netid@login2 ~]$ which singularity
/usr/bin/singularity
[Netid@login2 ~]$ singularity --version
singularity version 3.8.6-1.el8

Figure 1.1 – Checking the Singularity Version

Please refer to the documentation for Singularity version 3.8 to obtain information on this container platform’s general use and capabilities. Additionally, consult the man pages for the various Singularity commands. You may view these commands by executing singularity --help. The remaining documentation will provide information specific to the implementation of Singularity on the Open Enclave.

Building or Obtaining Container Images

Singularity on the Open Enclave can only be run with user-level permissions. In short, this means containers run on the Open Enclave will be executed in user space. As a result, building or modifying a Singularity container on the Open Enclave is impossible because this operation requires root-level permissions. Please see the Singularity documentation for more information on building or modifying containers.

However, Singularity containers can be obtained by the user. Containers can be directly copied to the system from another source or pulled from container repositories such as the Sylabs.io Container Library, Docker Hub, Singularity Hub, or GitHub. This documentation will limit the discussion to building and running containers in the compressed read-only Singularity Image File (SIF) format.

The Singularity build command can pull existing containers from the Sylabs.io Container Library, Docker Hub, or Singularity Hub. In the case of Docker Hub, the Docker container is converted to SIF format for use with Singularity. To build a Singularity container from an existing one, use the command shown in Figure 2.1.

singularity build <Container-Name> <URI-to-Container>

Figure 2.1 – singularity build Command Syntax

In this case, the <Container-Name> argument is the path and name for the container to be built. The <URI-to-Container> is the location of the container to download. 

  • URI beginning with library:// to build from the Sylabs.io Container Library
  • URI beginning with docker:// to build from Docker Hub
  • URI beginning with shub:// to build from Singularity Hub

For example, the command in Figure 2.2 builds a container named docker from the docker image on Docker Hub.

singularity build docker docker://docker

Figure 2.2 – singularity build Docker command

You can download a container from the Container Library using the build command.

singularity build lolcow.sif library://lolcow

Figure 2.3 – Downloading a Container from the Container Library with build

The first argument lolcow.sif specifies a path and name for your container. The second argument library://lolcow gives the Container Library URI from which to download. By default the container will be converted to a compressed, read-only SIF.

You can also use build to download layers from Docker Hub and assemble them into SingularityCE containers.

singularity build lolcow.sif docker://sylabsio/lolcow

Figure 2.4 – Building a Container from Docker Hub with build

Alternatively, the remote container can be pulled via the singularity pull command. You may download a native Singularity image with its default name from Singularity Hub using the command shown in Figure 2.6.

singularity pull shub://vsoch/hello-world

Figure 2.6 – singularity pull Command Syntax

shub://vsoch/hello-world is the most general query and returns the most recent image for the collection. This coincides with the tag latest. The newly downloaded image file is hello-world_latest.sif. You can also pull the image with a customized name, such as hello.sif, as shown in Figure 2.7.

[Netid@login2 ~]$ singularity pull --name hello.sif shub://vsoch/hello-world
[Netid@login2 ~]$ ls
basic_env  docker_latest.sif  hello.sif  lolcow.sif   
docker  hello-world_latest.sif  lolcow.simg

Figure 2.7 – singularity pull with a Custom Name

Here is another example showing that you can pull an image from Docker Hub:

[Netid@login2 ~]$ singularity pull docker://godlovedc/lolcow
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 9fb6c798fa41 done  
Copying blob 3b61febd4aef done 
...
Writing manifest to image destination
Storing signatures
INFO:    Creating SIF file...

Figure 2.8 – Pulling an Image from Docker Hub with singularity pull

Using Container Images

Interactive Shells

To access a container interactively, use the singularity shell command, as shown in Figure 3.1. The <Container-Name> argument is the path and name of the container file. This argument can also be a remote container’s URI, in which case an ephemeral container is used and then removed.

In this mode, changes made to the filesystem within the container are not kept once the instance (shell) is closed. The shell can be closed by issuing the exit command.

[Netid@login2 ~]$ singularity shell hello-world_latest.sif
Singularity> pwd
/nfs/home/Netid
Singularity> ls
Environments  basic_env  docker  docker_latest.sif  hello-world_latest.sif  hello.sif lolcow.sif  lolcow.simg lolcow_latest.sif requirements.txt
Singularity> id
uid=11015(jcui) gid=3319(tug2106) groups=3319(tug2106),3232(utsoftware),3294(utksoftware),3302(tug2089),3396(tug2170),3653(tug2412)
Singularity> exit
[Netid@login2 ~]$

Figure 3.1 – singularity shell Command Syntax

Executing Commands

To run a command from within the container, use the singularity exec command. Figure 3.2 shows the syntax for this command.

[Netid@login2 ~]$ singularity exec hello-world_latest.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
[Netid@login2 ~]$ singularity exec hello-world_latest.sif ls /
bin  boot  dev	environment  etc  home	lib  lib64  lustre  media  mnt	nfs  opt  proc rawr.sh  root  run  sbin  singularity  srv  sys  tmp  usr  var

Figure 3.2 – singularity exec Command Syntax

The <Container-Name> argument is the path and name of the container file, the <Command> argument is the command to execute, and the <Args> are passed to the command. As with an interactive shell, the <Container-Name> can contain a remote container’s URI, where an ephemeral container is used and then removed. As an example, Figure 3.3 shows a container being built from the Container Library and then used to execute the echo command.

[Netid@login2 ~]$ singularity build centos-container.sif library://lolcow
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /nfs/home/jcui/.singularity/cache/library/sha256.cef378b9a9274c20e03989909930e87b411d0c08cf4d40ae3b674070b899cb5b
INFO:    Creating SIF file...
INFO:    Build complete: centos-container.sif
[Netid@login2 ~]$ singularity exec ~/centos-container.sif echo "Hello, world"
Hello, world

Figure 3.3 – Using a Container to Execute a Command

Running Scripts

A Singularity container can contain a runscript with commands that are executed when the container is run. More information on runscripts is available in the Singularity documentation. To execute a container’s runscript, use the command shown in Figure 3.4.

[Netid@login2 ~]$ singularity run hello-world_latest.sif
RaawwWWWWWRRRR!! Avocado!

Figure 3.4 – singularity run Command Syntax

Accessing External Files within Containers

By default, Singularity bind mounts the $HOME, /tmp, and $PWD directories within the container at runtime. On Open Enclave resources the user’s $HOME, /lustre/haven/user, and /lustre/haven/proj directories are bind mounted within the container at runtime. The user can access the files for which they have access permissions within these directories and their subdirectories in the container. Changes to these files or the creation of new files will be reflected in the bound directories outside of the container. Therefore, these changes will persist once the container exits.
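
For example, a minimal sketch (using the hello-world image pulled earlier; the file name created-in-container.txt is arbitrary) showing that a file created under $HOME from inside the container persists on the host:

# Create a file in $HOME from inside the container, then list it from the host
singularity exec hello-world_latest.sif touch $HOME/created-in-container.txt
ls $HOME/created-in-container.txt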

If additional external locations are needed from within the container, the --bind option can be used to define the additional bind mounts. The general syntax for the --bind option is displayed in Figure 4.1.

singularity shell --bind <External>:<Internal> <Container-Name>
singularity run --bind <External>:<Internal> <Container-Name>
singularity exec --bind <External>:<Internal> <Container-Name>

Figure 4.1 – Syntax of the --bind Option

The <External> argument is the directory path available outside the container and <Internal> is the directory path it will be bound to within the container. The <Container-Name> argument is the path and name of the container file. If :<Internal> is not given, then the <External> directory is bind mounted to the same location within the container. Multiple bind mounts can be defined by utilizing a comma-delimited list. The syntax for specifying multiple bind mounts is demonstrated in Figure 4.2.

singularity shell --bind <External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2> <Container-Name>
singularity run --bind <External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2> <Container-Name>
singularity exec --bind <External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2> <Container-Name>

Figure 4.2 – Specifying Multiple Bind Mounts

The arguments all have the same meaning when multiple bind mounts are given.

This option can be used with the shell, exec, and run singularity commands. On the Open Enclave, the <Internal> mount points do not need to already exist within the container.
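
For example, a hypothetical bind of a Lustre project directory to /data inside the container (replace <your-project> with your actual project directory; /data is an arbitrary choice of internal path):

# Bind a project directory on Lustre to /data inside the container and list its contents
singularity exec --bind /lustre/haven/proj/<your-project>:/data hello-world_latest.sif ls /data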

An environment variable, SINGULARITY_BIND, may also be defined to pass bind mount information without using the --bind option. This is especially helpful when executing the container directly with ./<Container-Name>, where <Container-Name> is the path and name of the container file. You may set this variable with the export command. Figure 4.3 shows how to use this command to set the SINGULARITY_BIND variable. After you set the variable, interact with, execute a command in, or run the container to see the results.

export SINGULARITY_BIND="<External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2>"

Figure 4.3 – Setting the SINGULARITY_BIND Variable
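
For example, a minimal sketch (the paths are placeholders) that sets SINGULARITY_BIND and then executes the container directly:

# Define the bind mounts once, then execute the container's runscript directly
export SINGULARITY_BIND="/lustre/haven/proj/<your-project>:/data"
./hello-world_latest.sif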

Utilizing Containers in Batch Jobs

As with any other command, you can use Singularity containers within a non-interactive batch script. If your image contains a runscript, then you can use singularity run to execute it in the job. You can also use singularity exec to execute arbitrary commands (or scripts) within the image. Figure 5.1 shows the syntax of singularity exec with a simple example, and Figure 5.2 shows an example batch script that uses hello-world_latest.sif to print out information about the native OS of the image.

singularity exec [exec options...] <container> <command>
[Netid@login2 ~]$ singularity exec hello-world_latest.sif /bin/echo Hello World!
Hello World!

Figure 5.1 – Singularity exec commands

The difference between singularity run and singularity exec

Above we used the singularity exec command. Earlier in this document we used singularity run. To clarify, the difference between these two commands is:

  • singularity run: This runs the default command set for containers based on the specified image. This default command is set within the image metadata when the image is built. You do not specify a command to run when using singularity run; you simply specify the image file name. You can use the singularity inspect command to see what command is run by default when starting a new container based on an image, as shown below.
  • singularity exec: This starts a container based on the specified image and runs the command provided on the command line following singularity exec <image file name>. This overrides any default command specified within the image metadata that would otherwise be run if you used singularity run.
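
For example, you can view the default runscript of the hello-world image pulled earlier with singularity inspect (the output depends on the image):

# Show the runscript that singularity run would execute
singularity inspect --runscript hello-world_latest.sif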

If your target system is set up with a batch scheduler such as Slurm, a standard way to execute applications is through a batch script. The following example illustrates a Slurm batch script that starts a Singularity container on the node allocated to the job. It can easily be adapted for other major batch systems.

#!/bin/bash
#SBATCH -J singularity_test           
#SBATCH -A ACF-UTK0011                
#SBATCH --nodes=1                     
#SBATCH --ntasks-per-node=48          
#SBATCH --partition=campus            
#SBATCH --time=0-01:00:00             
#SBATCH --error=singularity_test.err           
#SBATCH --output=singularity_test.out           
#SBATCH --qos=campus

# Singularity command line options
singularity exec hello-world_latest.sif cat /etc/os-release

Figure 5.2 – Singularity Commands to Use in a Batch Script

If the above batch-job script is named singularity_test, for instance, the job is submitted as usual with sbatch:

[Netid@login2 ~]$ sbatch singularity_test

Figure 5.3 – Submitting the Batch Job with sbatch

[Netid@login2 ~]$ cat singularity_test.out
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

Figure 5.4 – Viewing the Output of the singularity_test Job

The shell command should not be used since this is an interactive command and will not work well within a batch script. Please refer to the Using Container Images section for the correct syntax of the exec and run commands.

The singularity command is serial and will not launch tasks on multiple nodes. Therefore, batch scripts should only be submitted to one node.

IMPORTANT:

It is essential that you pull Singularity Hub containers before you interact with an image binary directly. Singularity Hub now has strict limits for its API use, so if you run an `exec`, `shell`, or `run` directly to a `shub://` unique resource identifier, especially on a supercomputer, you will likely use up your weekly quota immediately.

You should never issue any of the following commands:

singularity run shub://vsoch/hello-world
singularity shell shub://vsoch/hello-world
singularity exec shub://vsoch/hello-world ls /

Each of these commands uses up one download of your container, and you are limited to a weekly quota.
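
Instead, pull the container once and then run, shell into, or exec against the local image file, for example:

# Download the image a single time...
singularity pull shub://vsoch/hello-world
# ...then work with the local SIF file as many times as needed
singularity run hello-world_latest.sif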

Utilizing Containers with Parallel Batch Jobs using MPI

Parallel applications within containers can be launched using the mpirun command on the Open Enclave. However, the MPI application should be built using an MPICH compatible MPI library and not OpenMPI so that the Open Enclave’s default mpirun command can be used.

To use mpirun with Singularity, adapt the command shown in Figure 6.1 to your use case.

mpirun <mpirun-Options> singularity exec <Singularity-Options> <Container-Name> <Command> <Args>

Figure 6.1 – Using mpirun with Singularity

The <mpirun-Options> are the typical mpirun options for the application. <Singularity-Options> are the options discussed above for Singularity such as binding additional paths. <Container-Name> is the path and name of the container file. <Command> is the executable path and name within the container along with its associated arguments, <Args>.

However, if the application built within the container utilizes an OpenMPI compatible MPI library, then the mpirun utilized must be OpenMPI compatible. This is done by swapping the environment and loading the openmpi module. Execute module avail openmpi to get a list of installed versions of openmpi. Then, use module swap PE-intel PE-gnu to change from the Intel programming environment to the GNU environment. Next, execute module load openmpi to use the OpenMPI MPI library. Follow the same syntax for the mpirun command given above to use it with OpenMPI.
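
For example, a sketch of the module commands described above, followed by the mpirun syntax from Figure 6.1 (the exact OpenMPI module name and version depend on what module avail openmpi reports on your system):

# List available OpenMPI modules, switch to the GNU programming environment, and load OpenMPI
module avail openmpi
module swap PE-intel PE-gnu
module load openmpi
# Launch the containerized MPI application
mpirun <mpirun-Options> singularity exec <Singularity-Options> <Container-Name> <Command> <Args>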

The singularity exec command is shown above to run a specific command or executable within the container. However, the singularity run command may also be utilized, depending upon the nature of the runscript it executes. If the runscript executes an MPI-built binary, then this should also work; however, any other commands within the script will be executed redundantly by each MPI task launched.

Singularity MPI Usage Example

To demonstrate the usage of Singularity with MPI applications, an NWChem Docker container can be retrieved from Docker Hub and executed with NWChem’s single point SCF energy sample file. For the purposes of this example, create a directory in your home directory named Singularity-MPI-Tests. Enter the directory, then execute singularity pull docker://nwchemorg/nwchem-dev to retrieve the container image. Next, create an input file named h2o.nw with the contents given in Figure 6.2. Please note that this sample comes from NWChem’s user documentation and should only be used for testing purposes.
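
A brief sketch of these setup steps (with the default naming, the pulled image is expected to be nwchem-dev_latest.sif):

# Create and enter the test directory, then pull the NWChem container from Docker Hub
mkdir ~/Singularity-MPI-Tests
cd ~/Singularity-MPI-Tests
singularity pull docker://nwchemorg/nwchem-dev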

start h2o
title "Water in 6-31g basis set"
 
geometry units au
   O      0.00000000    0.00000000    0.00000000
   H      0.00000000    1.43042809   -1.10715266
   H      0.00000000   -1.43042809   -1.10715266
end
basis
   H library 6-31g
   O library 6-31g
end
task scf optimize freq

Figure 6.2 – NWChem Input File Sample

Submit an interactive job to the Open Enclave that requests two nodes. To learn how to submit an interactive job, please review the Running Jobs document. Use the pwd command to verify that you are in the directory that contains the NWChem container and the input file. Run the container in parallel with the command shown in Figure 6.3.

mpirun -n <procs> singularity exec <Container-Name> nwchem h2o.nw &> h2o.log

Figure 6.3 – Executing a Container in Parallel with mpirun

Replace <procs> with the number of processors allocated to your job. For example, if you land on two Beacon nodes, you will have thirty-two processors. Replace the <Container-Name> argument with the name of the NWChem container. After the execution completes, use the cat or view commands to review the contents of the h2o.log file. The beginning of the file should state the number of processors you specified with the mpirun -n command.

[Netid@login2 ~]$ mpirun -n 32 singularity exec nwchem-dev_latest.sif nwchem h2o.nw &> h2o.log
[Netid@login2 ~]$ view h2o.log
Figure 6.4 – Executing the NWChem Container and Viewing the h2o.log File