Chapter 5. Working with containerized services
This chapter provides examples of commands that you can use to manage containers and explains how to troubleshoot your OpenStack Platform containers.
5.1. Managing containerized services
Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services.
Listing containers and images
To list running containers, run the following command:
$ sudo podman ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo podman ps --all
To list container images, run the following command:
$ sudo podman images
Inspecting container properties
To view the properties of a container or container image, use the podman inspect command. For example, to inspect the keystone container, run the following command:
$ sudo podman inspect keystone
Managing containers with Systemd services
Previous versions of OpenStack Platform managed containers with Docker and its daemon. In OpenStack Platform 16, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container.
It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead.
To check the status of a container, run the systemctl status command:
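For example, to check the status of the keystone container:
$ sudo systemctl status tripleo_keystone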
To stop a container, run the systemctl stop command:
$ sudo systemctl stop tripleo_keystone
To start a container, run the systemctl start command:
$ sudo systemctl start tripleo_keystone
To restart a container, run the systemctl restart command:
$ sudo systemctl restart tripleo_keystone
Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations:
- Clean exit code or signal, such as running the podman stop command.
- Unclean exit code, such as the podman container crashing after a start.
- Unclean signals.
- Timeout if the container takes more than 1m 30s to start.
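To see the restart policy and start timeout that Systemd applies to a particular container service, you can query the unit properties. A minimal sketch using the keystone service:
$ sudo systemctl show tripleo_keystone --property=Restart,TimeoutStartUSec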
For more information about Systemd services, see the systemd.service documentation.
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart.
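For example, assuming you want a keystone configuration change to persist across container restarts, edit the puppet-generated file on the host and then restart the container service:
$ sudo vi /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
$ sudo systemctl restart tripleo_keystone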
Monitoring podman containers with Systemd timers
The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts.
To list all OpenStack Platform container timers, run the systemctl list-timers command and limit the output to lines containing tripleo:
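For example, you can filter the output with grep:
$ sudo systemctl list-timers | grep tripleo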
To check the status of a specific container timer, run the systemctl status command for the healthcheck service:
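For example, to check the keystone health check service, assuming the service name follows the same tripleo_&lt;service&gt;_healthcheck pattern as the timer shown in the next example:
$ sudo systemctl status tripleo_keystone_healthcheck.service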
To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command:
$ sudo systemctl status tripleo_keystone_healthcheck.timer
● tripleo_keystone_healthcheck.timer - keystone container healthcheck
Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled)
Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago
If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out but will run according to the timer. You can also start the check manually.
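For example, to start the keystone health check manually, assuming the health check service name matches its timer:
$ sudo systemctl start tripleo_keystone_healthcheck.service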
The podman ps command does not show the container health status.
Checking container logs
OpenStack Platform 16 introduces a new logging directory, /var/log/containers/stdout, that contains the standard output (stdout) and standard error (stderr) of all the containers, consolidated into one file for each container.
Paunch and the container-puppet.py script configure podman containers to push their output to the /var/log/containers/stdout directory, which creates a collection of all logs, even for deleted containers such as the container-puppet-* containers.
The host also applies log rotation to this directory, which prevents huge files and disk space issues.
If a container is replaced, the new container outputs to the same log file because podman uses the container name rather than the container ID.
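For example, to list the per-container log files in this directory (exact file names can vary by deployment):
$ sudo ls /var/log/containers/stdout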
You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo podman logs keystone
Accessing containers
To enter the shell for a containerized service, use the podman exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo podman exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo podman exec --user 0 -it <NAME OR ID> /bin/bash
To exit the container, run the following command:
# exit
5.2. Troubleshooting containerized services
If a containerized service fails during or after overcloud deployment, use the following recommendations to determine the root cause of the failure:
Before running these commands, check that you are logged into an overcloud node and not running these commands on the undercloud.
Checking the container logs
Each container retains standard output from its main process. This output acts as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, use the following command:
$ sudo podman logs keystone
In most cases, this log provides the cause of a container’s failure.
Inspecting the container
In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data:
$ sudo podman inspect keystone
This provides a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command:
$ sudo podman inspect keystone | jq .[0].Mounts
You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option:
$ sudo podman inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone
The --format option uses Go syntax to create queries.
Use these options in conjunction with the podman run command to recreate the container for troubleshooting purposes:
$ OPTIONS=$( sudo podman inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone )
$ sudo podman run --rm $OPTIONS /bin/bash
Running commands in the container
In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following podman command to execute commands within a running container. For example, to run a command in the keystone container:
$ sudo podman exec -ti keystone <COMMAND>
The -ti options run the command through an interactive pseudoterminal.
Replace <COMMAND> with your desired command. For example, each container has a health check script to verify the service connection. You can run the health check script for keystone with the following command:
$ sudo podman exec -ti keystone /openstack/healthcheck
To access the container’s shell, run podman exec using /bin/bash as the command:
$ sudo podman exec -ti keystone /bin/bash
Exporting a container
When a container fails, you might need to investigate the full contents of the file system. In this case, you can export the full file system of a container as a tar archive. For example, to export the keystone container’s file system, run the following command:
$ sudo podman export keystone -o keystone.tar
This command creates the keystone.tar archive, which you can extract and explore.
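For example, to extract the archive into a directory for inspection (the keystone-export directory name is arbitrary):
$ mkdir keystone-export
$ tar -xf keystone.tar -C keystone-export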