Chapter 24. Using Podman in HPC environment
You can use Podman with Open MPI (Message Passing Interface) to run containers in a High Performance Computing (HPC) environment.
24.1. Using Podman with MPI
The example is based on the ring.c program taken from Open MPI. In this example, a value is passed around by all processes in a ring-like fashion. Each time the message passes rank 0, the value is decremented. When each process receives the 0 message, it passes it on to the next process and then quits. By passing the 0 first, every process gets the 0 message and can quit normally.
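The flow described above can be summarized in pseudocode (a sketch of the ring logic, not the actual Open MPI ring.c source; the initial value of 10 is illustrative):

```
value = 10                          # rank 0 picks an initial value
send value to next rank             # rank 0 starts the ring
repeat:
    receive value from previous rank
    if rank == 0:
        value = value - 1           # decremented once per full pass around the ring
    send value to next rank         # forward, so every rank sees the message
    if value == 0:
        quit                        # the 0 message has been forwarded; exit cleanly
```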
Prerequisites
- The container-tools module is installed.
Procedure
Install Open MPI:
# yum install openmpi

To activate the environment modules, type:
$ . /etc/profile.d/modules.sh

Load the mpi/openmpi-x86_64 module:

$ module load mpi/openmpi-x86_64
Optionally, to automatically load the mpi/openmpi-x86_64 module, add this line to the .bashrc file:

$ echo "module load mpi/openmpi-x86_64" >> .bashrc
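Note that the echo command above appends unconditionally, so running it more than once adds duplicate lines to .bashrc. A guarded variant is sketched below; the rcfile and line variables are illustrative names, not part of the documented procedure:

```shell
# Append the module-load line only if it is not already present in .bashrc.
rcfile="$HOME/.bashrc"
line="module load mpi/openmpi-x86_64"
grep -qxF "$line" "$rcfile" 2>/dev/null || echo "$line" >> "$rcfile"
```

Re-running this snippet leaves .bashrc unchanged once the line is present.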
To combine mpirun and podman, create a container with the following definition:
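The container definition itself did not survive in this copy of the text. A minimal Containerfile sketch is shown below; the base image, the openmpi-devel package name, the ring.c download URL, and the mpicc path are assumptions, not taken from the original. It compiles ring.c to /home/ring, the path that mpirun uses later:

```
FROM registry.access.redhat.com/ubi8/ubi

# Install the Open MPI development tools (package name assumed).
RUN yum -y install openmpi-devel wget && yum clean all

# Fetch the example ring.c from the upstream Open MPI repository (URL assumed)
# and compile it to /home/ring, the path used later by mpirun.
RUN wget https://raw.githubusercontent.com/open-mpi/ompi/main/test/simple/ring.c && \
    /usr/lib64/openmpi/bin/mpicc ring.c -o /home/ring
```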
Build the container:

$ podman build --tag=mpi-ring .

Start the container. On a system with 4 CPUs, this command starts 4 containers:
$ mpirun \
   --mca orte_tmpdir_base /tmp/podman-mpirun \
   podman run --env-host \
    -v /tmp/podman-mpirun:/tmp/podman-mpirun \
    --userns=keep-id \
    --net=host --pid=host --ipc=host \
    mpi-ring /home/ring

As a result, mpirun starts up 4 Podman containers, and each container runs one instance of the ring binary. All 4 processes communicate with each other over MPI.
24.2. The mpirun options
The following mpirun options are used to start the container:
- The --mca orte_tmpdir_base /tmp/podman-mpirun option tells Open MPI to create all its temporary files in /tmp/podman-mpirun and not in /tmp. If more than one node is used, this directory is named differently on other nodes. Using /tmp directly would require mounting the complete /tmp directory into the container, which is more complicated.
The mpirun command also specifies the command to start, in this case the podman command. The following podman options are used to start the container:
- The run command runs a container.
- The --env-host option copies all environment variables from the host into the container.
- The -v /tmp/podman-mpirun:/tmp/podman-mpirun option tells Podman to mount the directory where Open MPI creates its temporary directories and files, so that it is available in the container.
- The --userns=keep-id option ensures the same user ID mapping inside and outside the container.
- The --net=host --pid=host --ipc=host options set the same network, PID, and IPC namespaces as the host.
- mpi-ring is the name of the container image.
- /home/ring is the MPI program in the container.