
Singularity User Guide on ISAAC Legacy



Introduction

Singularity is a container platform specifically designed for use on high-performance computing (HPC) clusters. Containers allow the packaging of software and their associated environments (OS, libraries, tools, etc.) within an encapsulated file that can be executed on diverse systems without the need to explicitly port or integrate with the OS, libraries, or environment specific to the host system.

Singularity version 3.5.2-1.1.el7 is available on Open Enclave login and compute nodes in the default environment. It is not necessary to load a module to access singularity on the Open Enclave. Please refer to the documentation for Singularity version 3.5 for general information on the use and capabilities of this container platform. Additionally, consult the man pages for the various Singularity commands, which you can list by executing singularity --help. The remainder of this document provides information specific to the implementation of Singularity on the Open Enclave.

Building or Obtaining Container Images

Singularity on the Open Enclave runs only with user-level permissions; in short, containers run on the Open Enclave execute in user space. As a result, it is not possible to build a Singularity container from a definition file or modify an existing container on the Open Enclave, because those operations require root-level permissions. Please see the Singularity documentation for more information on building or modifying containers.

However, Singularity containers can still be obtained by the user. Containers can be copied directly to the system from another source or pulled from container repositories such as the Sylabs.io Container Library, Docker Hub, Singularity Hub, or GitHub. This documentation limits the discussion to building and running containers in the compressed, read-only Singularity Image File (SIF) format.

The Singularity build command can pull existing containers from the Sylabs.io Container Library, Docker Hub, or Singularity Hub. In the case of Docker Hub, the Docker container is converted to SIF format for use with Singularity. To build a Singularity container from an existing one, use the command shown in Figure 2.1.

singularity build <Container-Name> <URI-to-Container>

Figure 2.1 – singularity build Command Syntax

In this case, the <Container-Name> argument is the path and name for the container to be built, and <URI-to-Container> is the location of the container to download. A URI beginning with library:// points to the Sylabs.io Container Library, docker:// points to Docker Hub, and shub:// points to Singularity Hub.
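For example, a minimal sketch that builds a local SIF image from the official Ubuntu image on Docker Hub (the file name ubuntu.sif is an arbitrary choice):

singularity build ubuntu.sif docker://ubuntu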

Alternatively, the remote container can be pulled via the singularity pull command. In the case of Docker containers, these are converted to SIF format. To retrieve a remote container, use the command shown in Figure 2.2.

singularity pull <URI-to-Container>

Figure 2.2 – singularity pull Command Syntax
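For instance, pulling the official Ubuntu image from Docker Hub produces a local SIF file whose name is derived from the image name and tag (here, ubuntu_latest.sif):

singularity pull docker://ubuntu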

Using Container Images

Interactive Shells

To access a container interactively, use the command shown in Figure 3.1.

singularity shell <Container-Name>

Figure 3.1 – singularity shell Command Syntax

The <Container-Name> argument is the path and name of the container file. This argument may also be a remote container’s URI, in which case an ephemeral container is downloaded, used, and then removed.

In this mode, changes made to the filesystem within the container are not kept once the instance (shell) is closed. The shell can be closed by issuing the exit command.
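As a sketch, assuming a container stored at ~/Containers/centos-container.sif (the path used in the examples below), a brief interactive session might look like the following; the Singularity> prompt indicates that commands execute inside the container:

singularity shell ~/Containers/centos-container.sif
Singularity> cat /etc/os-release
Singularity> exit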

Executing Commands

To run a command from within the container, use the singularity exec command. Figure 3.2 shows the syntax for this command.

singularity exec <Container-Name> <Command> <Args>

Figure 3.2 – singularity exec Command Syntax

The <Container-Name> argument is the path and name of the container file, the <Command> argument is the command to execute, and the <Args> are passed to the command. As with an interactive shell, <Container-Name> may be a remote container’s URI, in which case an ephemeral container is used and then removed. As an example, Figure 3.3 shows a CentOS container being used to execute the echo command.

singularity exec ~/Containers/centos-container.sif echo "Hello, world"

Figure 3.3 – Using a Container to Execute a Command

Running Scripts

A Singularity container can include a runscript: a set of commands that execute when the container is run (or executed directly). More information on runscripts is available in the Singularity documentation. To execute a runscript in a container, use the command shown in Figure 3.4.

singularity run <Container-Name>

Figure 3.4 – singularity run Command Syntax

The <Container-Name> argument is the path and name of the container file. As with interactive shells and executed commands, <Container-Name> may be a remote container’s URI, in which case an ephemeral container is used and then removed.
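For example, assuming the same CentOS container used earlier, the following invokes its runscript:

singularity run ~/Containers/centos-container.sif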

Accessing External Files within Containers

By default, Singularity bind mounts the $HOME, /tmp, and $PWD directories within the container at runtime. On Open Enclave resources, the user’s $HOME, /lustre/haven/user, and /lustre/haven/proj directories are bind mounted within the container at runtime. Within the container, users can access any files in these directories and their subdirectories for which they have access permissions. Changes to these files or the creation of new files will be reflected in the bound directories outside of the container, so these changes persist after the container exits.
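As a quick check, a sketch like the following lists a Lustre scratch directory from inside the hypothetical CentOS container; the $USER subdirectory is an assumption about the Lustre layout:

singularity exec ~/Containers/centos-container.sif ls /lustre/haven/user/$USER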

If additional external locations are needed from within the container, the --bind option can be used to define the additional bind mounts. The general syntax for the --bind option is displayed in Figure 4.1.

singularity [shell | exec | run] --bind <External>:<Internal> <Container-Name>

Figure 4.1 – Syntax of the --bind Option

The <External> argument is the directory path available outside the container and <Internal> is the directory path it will be bound to within the container. The <Container-Name> argument is the path and name of the container file. If :<Internal> is not given, then the <External> directory is bind mounted to the same location within the container. Multiple bind mounts can be defined by utilizing a comma-delimited list. The syntax for specifying multiple bind mounts is demonstrated in Figure 4.2.

singularity [shell | exec | run] --bind <External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2> <Container-Name>

Figure 4.2 – Specifying Multiple Bind Mounts

The arguments all have the same meaning when multiple bind mounts are given.

This option can be used with the singularity shell, exec, and run commands. On the Open Enclave, the <Internal> mount points do not need to exist within the container beforehand.
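For example, a sketch that bind mounts a hypothetical external directory /data/project to /mnt/project inside the container and lists its contents (both paths are placeholders):

singularity exec --bind /data/project:/mnt/project ~/Containers/centos-container.sif ls /mnt/project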

An environment variable, SINGULARITY_BIND, may also be defined to pass bind mount information without using the --bind option. This is especially helpful when executing the container directly with ./<Container-Name>, where <Container-Name> is the path and name of the container file. You may set this variable with the export command. Figure 4.3 shows how to use this command to set the SINGULARITY_BIND variable. After you set the variable, interact with, execute a command in, or run the container to see the results.

export SINGULARITY_BIND="<External>:<Internal>,<External_1>:<Internal_1>,<External_2>:<Internal_2>"

Figure 4.3 – Setting the SINGULARITY_BIND Variable
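For instance, a sketch using the hypothetical /data/project path from above, followed by direct execution of the container’s runscript:

export SINGULARITY_BIND="/data/project:/mnt/project"
~/Containers/centos-container.sif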

Utilizing Containers in Batch Jobs

The singularity command is in the default path on Open Enclave compute and login nodes, so no additional modules need to be loaded to access it. Thus, Singularity commands such as exec and run can be utilized within batch scripts. Include either of the commands shown in Figure 5.1 in your batch script, replacing <Container-Name> with the path and name of the container file.

singularity exec <Container-Name> <Command> <Args>
singularity run <Container-Name>

Figure 5.1 – Singularity Commands to Use in a Batch Script

The shell command should not be used, since it is interactive and will not work well within a batch script. Please refer to the Using Container Images section for the correct syntax of the exec and run commands.

The singularity command is serial and will not launch tasks on multiple nodes. Therefore, batch scripts that use it alone should request only one node.
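A minimal batch script sketch follows. The scheduler directives are intentionally generic placeholders (see the Running Jobs document for the exact directives used on the Open Enclave), and the container path and command are hypothetical:

#!/bin/bash
# Scheduler directives requesting a single node go here;
# see the Running Jobs document for the exact syntax.

cd $HOME/Containers
singularity exec my-container.sif my-command my-args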

Utilizing Containers with Parallel Batch Jobs using MPI

Parallel applications within containers can be launched using the mpirun command on the Open Enclave. However, the MPI application should be built against an MPICH-compatible MPI library, not OpenMPI, so that the Open Enclave’s default mpirun command can be used.

To use mpirun with Singularity, adapt the command shown in Figure 6.1 to your use case.

mpirun <mpirun-Options> singularity exec <Singularity-Options> <Container-Name> <Command> <Args>

Figure 6.1 – Using mpirun with Singularity

The <mpirun-Options> are the typical mpirun options for the application. <Singularity-Options> are the options discussed above for Singularity such as binding additional paths. <Container-Name> is the path and name of the container file. <Command> is the executable path and name within the container along with its associated arguments, <Args>.
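For example, a sketch (the container path, executable, process count, and input file are all hypothetical) that runs a containerized MPI binary across 32 processes:

mpirun -n 32 singularity exec ~/Containers/mpi-app.sif /opt/app/bin/mpi_app input.dat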

However, if the application built within the container utilizes an OpenMPI-compatible MPI library, then the mpirun utilized must be OpenMPI compatible. This is done by swapping the programming environment and loading the openmpi module. Execute module avail openmpi to list the installed versions of openmpi. Then, use module swap PE-intel PE-gnu to change from the Intel programming environment to the GNU environment. Next, execute module load openmpi to use the OpenMPI MPI library. Follow the same syntax for the mpirun command given above to use it with OpenMPI.
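The module sequence described above, as it would be entered at the prompt:

module avail openmpi
module swap PE-intel PE-gnu
module load openmpi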

The singularity exec command is shown above because it runs a specific command or executable within the container. The singularity run command may also be utilized, depending upon the nature of the runscript it executes: if the runscript launches an MPI-built binary, this should also work, but any other commands in the runscript will be executed once for each MPI task launched.

Singularity MPI Usage Example

To demonstrate the usage of Singularity with MPI applications, an NWChem Docker container can be retrieved from Docker Hub and executed with NWChem’s single-point SCF energy sample file. For the purposes of this example, create a directory in your home directory named Singularity-MPI-Tests. Enter the directory, then execute singularity pull docker://nwchemorg/nwchem-dev to retrieve the container image. Next, create an input file named h2o.nw with the contents given in Figure 6.2. Please note that this sample comes from NWChem’s user documentation and should only be used for testing purposes.
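Concretely, the setup steps above as shell commands:

mkdir ~/Singularity-MPI-Tests
cd ~/Singularity-MPI-Tests
singularity pull docker://nwchemorg/nwchem-dev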

start h2o
title "Water in 6-31g basis set"
 
geometry units au
   O      0.00000000    0.00000000    0.00000000
   H      0.00000000    1.43042809   -1.10715266
   H      0.00000000   -1.43042809   -1.10715266
end
basis
   H library 6-31g
   O library 6-31g
end
task scf optimize freq

Figure 6.2 – NWChem Input File Sample

Submit an interactive job to the Open Enclave that requests two nodes. To learn how to submit an interactive job, please review the Running Jobs document. Use the pwd command to verify that you are in the directory that contains the NWChem container and the input file. Run the container in parallel with the command shown in Figure 6.3.

mpirun -n <procs> singularity exec <Container-Name> nwchem h2o.nw &> h2o.log

Figure 6.3 – Executing a Container in Parallel with mpirun

Replace <procs> with the number of processors allocated to your job; for example, if you land on two Beacon nodes, you will have thirty-two processors. Replace the <Container-Name> argument with the name of the NWChem container. After the execution completes, use the cat or view commands to review the contents of the h2o.log file. The beginning of the file should state the number of processors you specified with mpirun -n.
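For instance, a filled-in sketch assuming the pulled image was saved as nwchem-dev_latest.sif (the default file name singularity pull derives from this image and tag) and a 32-processor allocation:

mpirun -n 32 singularity exec nwchem-dev_latest.sif nwchem h2o.nw &> h2o.log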