Transitioning to Containerized Services
A basic guide to working with OpenStack Platform containerized services
Chapter 1. Introduction
Past versions of Red Hat OpenStack Platform used services managed with Systemd. However, more recent versions of OpenStack Platform now use containers to run services. Some administrators might not have a good understanding of how containerized OpenStack Platform services operate, so this guide aims to help you understand OpenStack Platform container images and containerized services. This includes:
- How to obtain and modify container images
- How to manage containerized services in the overcloud
- Understanding how containers differ from Systemd services
The main goal is to help you gain enough knowledge of containerized OpenStack Platform services to transition from a Systemd-based environment to a container-based environment.
1.1. Containerized services and Kolla
Each of the main Red Hat OpenStack Platform (RHOSP) services runs in a container. This provides a method to keep each service within its own isolated namespace, separated from the host. This has the following effects:
- During deployment, RHOSP pulls and runs container images from the Red Hat Customer Portal.
- The podman command performs management functions, such as starting and stopping services.
- To upgrade containers, you must pull new container images and replace the existing containers with newer versions.
Red Hat OpenStack Platform uses a set of containers built and managed with the Kolla toolset.
Chapter 2. Containerized services
Director installs the core OpenStack Platform services as containers on the overcloud. This section provides some background information on how containerized services work.
2.1. Containerized service architecture
Director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/deployment/ directory.
You must enable the OS::TripleO::Services::Podman service in the role for all nodes that use containerized services. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Podman service along with the base composable services. For example, the IronicConductor role uses the following role definition:
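An abbreviated sketch of such a role definition follows; the service list is illustrative, and the full default definition in roles_data.yaml includes many more composable services:

  - name: IronicConductor
    description: |
      Ironic Conductor node role
    HostnameFormatDefault: '%stackname%-ironicconductor-%index%'
    ServicesDefault:
      - OS::TripleO::Services::CACerts
      - OS::TripleO::Services::IronicConductor
      - OS::TripleO::Services::IronicPxe
      - OS::TripleO::Services::Podman
      - OS::TripleO::Services::Sshd
      - OS::TripleO::Services::Timesync
      - OS::TripleO::Services::TripleoFirewall
      - OS::TripleO::Services::TripleoPackages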
2.2. Containerized service parameters
Each containerized service template contains an outputs section that defines a data set passed to the OpenStack Orchestration (heat) service. In addition to the standard composable service parameters, the template contains a set of parameters specific to the container configuration.
- puppet_config - Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters:
  - config_volume - The mounted volume that stores the configuration.
  - puppet_tags - Tags to pass to Puppet during configuration. OpenStack uses these tags to restrict the Puppet run to the configuration resources of a particular service. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag so that only the keystone_config Puppet resources run on the configuration container.
  - step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service.
  - config_image - The container image used to configure the service.
- kolla_config - A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service.
- docker_config - Tasks to run on the configuration container for the service. All tasks are grouped into the following steps to help director perform a staged deployment:
- Step 1 - Load balancer configuration
- Step 2 - Core services (Database, Redis)
- Step 3 - Initial configuration of OpenStack Platform services
- Step 4 - General OpenStack Platform services configuration
- Step 5 - Service activation
- host_prep_tasks - Preparation tasks for the bare metal node to accommodate the containerized service.
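Putting these together, the outputs section of a containerized service template might look like the following abbreviated sketch. The values are illustrative; the real templates in /usr/share/openstack-tripleo-heat-templates/deployment/ are more extensive:

  outputs:
    role_data:
      description: Role data for the keystone service
      value:
        service_name: keystone
        puppet_config:
          config_volume: keystone
          puppet_tags: keystone_config
          step_config: include tripleo::profile::base::keystone
          config_image: registry.redhat.io/rhosp-rhel9/openstack-keystone:17.0
        kolla_config:
          /var/lib/kolla/config_files/keystone.json:
            command: /usr/sbin/httpd -DFOREGROUND
        docker_config:
          step_3:
            keystone:
              image: registry.redhat.io/rhosp-rhel9/openstack-keystone:17.0
              restart: always
        host_prep_tasks:
          - name: Create persistent log directory
            file:
              path: /var/log/containers/keystone
              state: directory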
2.3. Deploying a vendor plugin
To use some third-party hardware as a Block Storage back end, you must deploy a vendor plugin. The following example demonstrates how to deploy a vendor plugin to use Dell EMC hardware as a Block Storage back end.
Procedure
- Create a new container images file for your overcloud:

  $ sudo openstack tripleo container image prepare default \
    --local-push-destination \
    --output-env-file containers-prepare-parameter-dellemc.yaml

- Edit the containers-prepare-parameter-dellemc.yaml file.
- Add an excludes parameter to the strategy for the main Red Hat OpenStack Platform container images. Use this parameter to exclude the container image that the vendor container image will replace. In this example, the container image is the cinder-volume image:
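The edited strategy might then look like the following sketch; the namespace and tag values are illustrative:

  parameter_defaults:
    ContainerImagePrepare:
    - push_destination: true
      excludes:
      - cinder-volume
      set:
        namespace: registry.redhat.io/rhosp-rhel9
        name_prefix: openstack-
        name_suffix: ''
        tag: '17.0'
      tag_from_label: '{version}-{release}'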
- Add a new strategy to the ContainerImagePrepare parameter that includes the replacement container image for the vendor plugin:
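A sketch of such a strategy; the Dell EMC namespace, name suffix, and tag are illustrative assumptions:

    - push_destination: true
      includes:
      - cinder-volume
      set:
        namespace: registry.connect.redhat.com/dellemc
        name_prefix: openstack-
        name_suffix: -dellemc-rhosp
        tag: latest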
- Add the authentication details for the registry.connect.redhat.com registry to the ContainerImageRegistryCredentials parameter:
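For example, replacing the username and password with your registry service account credentials:

  parameter_defaults:
    ContainerImageRegistryCredentials:
      registry.redhat.io:
        my_username: my_password
      registry.connect.redhat.com:
        my_username: my_password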
Save the
containers-prepare-parameter-dellemc.yamlfile. Include the
containers-prepare-parameter-dellemc.yamlfile with any deployment commands, such as asopenstack overcloud deploy:openstack overcloud deploy --templates ... -e containers-prepare-parameter-dellemc.yaml ...$ openstack overcloud deploy --templates ... -e containers-prepare-parameter-dellemc.yaml ...Copy to Clipboard Copied! Toggle word wrap Toggle overflow When director deploys the overcloud, the overcloud uses the vendor container image instead of the standard container image.
IMPORTANT: The containers-prepare-parameter-dellemc.yaml file replaces the standard containers-prepare-parameter.yaml file in your overcloud deployment. Do not include the standard containers-prepare-parameter.yaml file in your overcloud deployment. Retain the standard containers-prepare-parameter.yaml file for your undercloud installation and updates.
Chapter 3. Obtaining and modifying container images
A containerized overcloud requires access to a registry with the required container images. This chapter provides information on how to prepare the registry and your undercloud and overcloud configuration to use container images for Red Hat OpenStack Platform.
3.1. Preparing container images
The overcloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize this environment file that you can use to prepare your container images.
If you need to configure specific container image versions for your overcloud, you must pin the images to a specific version. For more information, see Pinning container images for the overcloud.
Procedure
- Log in to your undercloud host as the stack user.
- Generate the default container image preparation file:

  $ openstack tripleo container image prepare default \
    --local-push-destination \
    --output-env-file containers-prepare-parameter.yaml

  This command includes the following additional options:

  - --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option.
  - --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml.

  Note: You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud.
- Modify the containers-prepare-parameter.yaml file to suit your requirements.
3.2. Container image preparation parameters
The default file for preparing your containers (containers-prepare-parameter.yaml) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images:
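For example, a file with a single strategy might look like the following sketch; the values are illustrative:

  parameter_defaults:
    ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: registry.redhat.io/rhosp-rhel9
        name_prefix: openstack-
        name_suffix: ''
        neutron_driver: ovn
      tag_from_label: '{version}-{release}'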
Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters that you can use with each ContainerImagePrepare strategy:
| Parameter | Description |
|---|---|
| excludes | List of regular expressions to exclude image names from a strategy. |
| includes | List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. |
| modify_append_tag | String to append to the tag for the destination image. For example, if you pull an image with the tag 17.0.0-5.161 and set the modify_append_tag to -hotfix, director tags the final image as 17.0.0-5.161-hotfix. |
| modify_only_with_labels | A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. |
| modify_role | String of ansible role names to run during upload but before pushing the image to the destination registry. |
| modify_vars | Dictionary of variables to pass to modify_role. |
| push_destination | Defines the namespace of the registry that you want to push images to during the upload process. If you set this parameter to true, the push_destination is set to the undercloud registry namespace, which is the recommended method. If the push_destination parameter is set to false or is not defined, director does not push images to a local registry and nodes pull images directly from the source. |
| pull_source | The source registry from where to pull the original container images. |
| set | A dictionary of key: value definitions that define where to obtain the initial images. |
| tag_from_label | Use the value of specified container image metadata labels to create a tag for every image and pull that tagged image. For example, if you set tag_from_label: {version}-{release}, director uses the version and release labels to construct a new tag. Director uses this parameter only if tag is not defined in the set dictionary. |
When you push images to the undercloud, use push_destination: true instead of push_destination: UNDERCLOUD_IP:PORT. The push_destination: true method provides a level of consistency across both IPv4 and IPv6 addresses.
The set parameter accepts a set of key: value definitions:
| Key | Description |
|---|---|
| ceph_image | The name of the Ceph Storage container image. |
| ceph_namespace | The namespace of the Ceph Storage container image. |
| ceph_tag | The tag of the Ceph Storage container image. |
| ceph_alertmanager_image ceph_alertmanager_namespace ceph_alertmanager_tag | The name, namespace, and tag of the Ceph Storage Alert Manager container image. |
| ceph_grafana_image ceph_grafana_namespace ceph_grafana_tag | The name, namespace, and tag of the Ceph Storage Grafana container image. |
| ceph_node_exporter_image ceph_node_exporter_namespace ceph_node_exporter_tag | The name, namespace, and tag of the Ceph Storage Node Exporter container image. |
| ceph_prometheus_image ceph_prometheus_namespace ceph_prometheus_tag | The name, namespace, and tag of the Ceph Storage Prometheus container image. |
| name_prefix | A prefix for each OpenStack service image. |
| name_suffix | A suffix for each OpenStack service image. |
| namespace | The namespace for each OpenStack service image. |
| neutron_driver | The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. |
| tag | Sets a specific tag for all images from the source. If not defined, director uses the Red Hat OpenStack Platform version number as the default value. This parameter takes precedence over the tag_from_label value. |
The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means that there is no longer a latest tag.
3.3. Guidelines for container image tagging
The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release.
- version
- Corresponds to a major and minor version of Red Hat OpenStack Platform. These versions act as streams that contain one or more releases.
- release
- Corresponds to a release of a specific container image version within a version stream.
For example, if the latest version of Red Hat OpenStack Platform is 17.0.0 and the release for the container image is 5.161, then the resulting tag for the container image is 17.0.0-5.161.
The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 17.0 and 17.0.0 link to the latest release in the 17.0.0 container stream. If a new minor release of 17.0 occurs, the 17.0 tag links to the latest release for the new minor release stream while the 17.0.0 tag continues to link to the latest release within the 17.0.0 stream.
The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. Use the following guidelines to determine whether to use tag or tag_from_label.
- The default value for tag is the major version for your OpenStack Platform version. For this version, it is 17.0. This always corresponds to the latest minor version and release.
- To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 17.0.2, set tag to 17.0.2.
- When you set tag, director always downloads the latest container image release for the version set in tag during installation and updates.
- If you do not set tag, director uses the value of tag_from_label in conjunction with the latest major version.
- The tag_from_label parameter generates the tag from the label metadata of the latest container image release it inspects from the Red Hat Container Registry. For example, the labels for a certain container might use the following version and release metadata:

  "Labels": {
    "release": "5.161",
    "version": "17.0.0",
    ...
  }

- The default value for tag_from_label is {version}-{release}, which corresponds to the version and release metadata labels for each container image. For example, if a container image has 17.0.0 set for version and 5.161 set for release, the resulting tag for the container image is 17.0.0-5.161.
- The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label, omit the tag parameter from your container preparation configuration.
- A key difference between tag and tag_from_label is that director uses tag to pull an image only based on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image.
3.4. Obtaining container images from private registries
The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file.
ContainerImageRegistryCredentials
Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries.
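A minimal sketch of this parameter:

  parameter_defaults:
    ContainerImageRegistryCredentials:
      registry.redhat.io:
        my_username: my_password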
In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content.
To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials:
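For example, with a second registry whose hostname and credentials are illustrative:

  parameter_defaults:
    ContainerImageRegistryCredentials:
      registry.redhat.io:
        my_username: my_password
      registry.internalsite.com:
        myuser2: '0th3rp@55w0rd!'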
The default ContainerImagePrepare parameter pulls container images from registry.redhat.io, which requires authentication.
For more information, see Red Hat Container Registry Authentication.
ContainerImageRegistryLogin
The ContainerImageRegistryLogin parameter controls whether the overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images.
You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy.
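For example:

  parameter_defaults:
    ContainerImageRegistryLogin: true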
However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials, setting ContainerImageRegistryLogin to true can cause the deployment to fail when the nodes attempt to log in. In that situation, set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud.
3.5. Layering image preparation entries
The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries.
The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 17.0-hotfix:
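A sketch of such a configuration; the registry paths are illustrative:

  parameter_defaults:
    ContainerImagePrepare:
    - tag_from_label: "{version}-{release}"
      push_destination: true
      excludes:
      - nova-api
      set:
        namespace: registry.redhat.io/rhosp-rhel9
        name_prefix: openstack-
        name_suffix: ''
    - push_destination: true
      includes:
      - nova-api
      set:
        namespace: registry.redhat.io/rhosp-rhel9
        tag: 17.0-hotfix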
The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match.
3.6. Modifying images during preparation
It is possible to modify images during image preparation, and then immediately deploy the overcloud with modified images.
Red Hat OpenStack Platform (RHOSP) director supports modifying images during preparation for RHOSP containers, not for Ceph containers.
Scenarios for modifying images include:
- As part of a continuous integration pipeline where images are modified with the changes being tested before deployment.
- As part of a development workflow where local changes must be deployed for testing and development.
- When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes.
To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image.
The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter:
- modify_role specifies the Ansible role to invoke for each image to modify.
- modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image.
- modify_vars is a dictionary of Ansible variables to pass to the role.
To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role.
While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect:
$ sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml
To use the openstack tripleo container image prepare command, your undercloud must contain a running image-serve registry. As a result, you cannot run this command before a new undercloud installation because the image-serve registry will not be installed. You can run this command after a successful undercloud installation.
3.7. Updating existing packages on container images
Red Hat OpenStack Platform (RHOSP) director supports updating existing packages on container images for RHOSP containers, not for Ceph containers.
Procedure
The following example ContainerImagePrepare entry updates all packages on the container images by using the dnf repository configuration of the undercloud host:
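A sketch of such an entry, assuming the yum_update.yml tasks of the tripleo-modify-image role; the surrounding set values are omitted for brevity:

  ContainerImagePrepare:
  - push_destination: true
    ...
    modify_role: tripleo-modify-image
    modify_append_tag: "-updated"
    modify_vars:
      tasks_from: yum_update.yml
      compare_host_packages: true
      yum_repos_dir_path: /etc/yum.repos.d
    ...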
3.8. Installing additional RPM files to container images
You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository.
Red Hat OpenStack Platform (RHOSP) director supports installing additional RPM files to container images for RHOSP containers, not for Ceph containers.
Procedure
The following example ContainerImagePrepare entry installs some hotfix packages on only the nova-compute image:
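A sketch of such an entry, assuming the hotfix RPM files are in /home/stack/nova-hotfix-pkgs on the undercloud:

  ContainerImagePrepare:
  - push_destination: true
    ...
    includes:
    - nova-compute
    modify_role: tripleo-modify-image
    modify_append_tag: "-hotfix"
    modify_vars:
      tasks_from: rpm_install.yml
      rpms_path: /home/stack/nova-hotfix-pkgs
    ...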
3.9. Modifying container images with a custom Dockerfile
You can specify a directory that contains a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives.
Red Hat OpenStack Platform (RHOSP) director supports modifying container images with a custom Dockerfile for RHOSP containers, not for Ceph containers.
Procedure
The following example runs the custom Dockerfile on the nova-compute image:
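A sketch of such an entry, assuming the Dockerfile resides in /home/stack/nova-custom:

  ContainerImagePrepare:
  - push_destination: true
    ...
    includes:
    - nova-compute
    modify_role: tripleo-modify-image
    modify_append_tag: "-hotfix"
    modify_vars:
      tasks_from: modify_image.yml
      modify_dir_path: /home/stack/nova-custom
    ...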
The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user:
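An illustrative Dockerfile; the base image tag and the default user are assumptions, so check the image that you modify:

  FROM registry.redhat.io/rhosp-rhel9/openstack-nova-compute:latest

  USER "root"

  COPY customize.sh /tmp/
  RUN /tmp/customize.sh

  USER "nova"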
3.10. Preparing a Satellite server for container images
Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite also acts as a registry for other container-enabled systems to use. For more information about managing container images, see Managing Container Images in the Red Hat Satellite 6 Content Management Guide.
The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME. Substitute this organization for your own Satellite 6 organization.
This procedure requires authentication credentials to access container images from registry.redhat.io. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication".
Procedure
- Create a list of all container images:

  $ sudo podman search --limit 1000 "registry.redhat.io/rhosp-rhel9" --format="{{ .Name }}" | sort > satellite_images
  $ sudo podman search --limit 1000 "registry.redhat.io/rhceph" | grep rhceph-5-dashboard-rhel8
  $ sudo podman search --limit 1000 "registry.redhat.io/rhceph" | grep rhceph-5-rhel8
  $ sudo podman search --limit 1000 "registry.redhat.io/openshift" | grep ose-prometheus

  If you plan to install Ceph and enable the Ceph Dashboard, you need the following ose-prometheus containers:
  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
  registry.redhat.io/openshift4/ose-prometheus:v4.6
  registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6
- Copy the satellite_images file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud.
- Run the following hammer command to create a new product (OSP Containers) in your Satellite organization:

  $ hammer product create \
    --organization "ACME" \
    --name "OSP Containers"

  This custom product will contain your images.
- Add the overcloud container images from the satellite_images file:
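A sketch of one way to do this with a shell loop over the image list; replace USERNAME and PASSWORD with your registry service account credentials:

  $ while read IMAGE; do \
      IMAGE_NAME=$(echo $IMAGE | cut -d"/" -f3 | sed "s/openstack-//g") ; \
      IMAGE_NOURL=$(echo $IMAGE | sed "s/registry.redhat.io\///g") ; \
      hammer repository create \
        --organization "ACME" \
        --product "OSP Containers" \
        --content-type docker \
        --url https://registry.redhat.io \
        --docker-upstream-name $IMAGE_NOURL \
        --upstream-username USERNAME \
        --upstream-password PASSWORD \
        --name $IMAGE_NAME ; \
    done < satellite_images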
- Add the Ceph Storage container image:
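A sketch of such a command; replace USERNAME and PASSWORD with your registry service account credentials:

  $ hammer repository create \
    --organization "ACME" \
    --product "OSP Containers" \
    --content-type docker \
    --url https://registry.redhat.io \
    --docker-upstream-name rhceph/rhceph-5-rhel8 \
    --upstream-username USERNAME \
    --upstream-password PASSWORD \
    --name rhceph-5-rhel8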
  Note: If you want to install the Ceph dashboard, include --name rhceph-5-dashboard-rhel8 in the hammer repository create command:
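  A sketch of the dashboard variant of the same command:

  $ hammer repository create \
    --organization "ACME" \
    --product "OSP Containers" \
    --content-type docker \
    --url https://registry.redhat.io \
    --docker-upstream-name rhceph/rhceph-5-dashboard-rhel8 \
    --upstream-username USERNAME \
    --upstream-password PASSWORD \
    --name rhceph-5-dashboard-rhel8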
- Synchronize the container images:

  $ hammer product synchronize \
    --organization "ACME" \
    --name "OSP Containers"

  Wait for the Satellite server to complete synchronization.
  Note: Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. For more information, see the Authentication section in the Hammer CLI Guide.
- If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application life cycle. For example, if you have an environment called production in your life cycle and you want the container images to be available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see Managing Content Views.
- Check the available tags for the base image:

  $ hammer docker tag list --repository "base" \
    --organization "ACME" \
    --lifecycle-environment "production" \
    --product "OSP Containers"

  This command displays tags for the OpenStack Platform container images within a content view for a particular environment.
- Return to the undercloud and generate a default environment file that prepares images using your Satellite server as a source. Run the following example command to generate the environment file:

  $ sudo openstack tripleo container image prepare default \
    --output-env-file containers-prepare-parameter.yaml

  --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml.
- Edit the containers-prepare-parameter.yaml file and modify the following parameters:
  - push_destination - Set this to true or false depending on your chosen container image management strategy. If you set this parameter to false, the overcloud nodes pull images directly from the Satellite. If you set this parameter to true, director pulls the images from the Satellite to the undercloud registry and the overcloud pulls the images from the undercloud registry.
  - namespace - The URL of the registry on the Satellite server.
  - name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views:
    - If you use content views, the structure is [org]-[environment]-[content view]-[product]-. For example: acme-production-myosp16-osp_containers-.
    - If you do not use content views, the structure is [org]-[product]-. For example: acme-osp_containers-.
  - ceph_namespace, ceph_image, ceph_tag - If you use Ceph Storage, include these additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option.
The following example environment file contains Satellite-specific parameters:
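A sketch with an illustrative Satellite hostname and content view prefix:

  parameter_defaults:
    ContainerImagePrepare:
    - push_destination: false
      set:
        ceph_image: acme-production-myosp16-osp_containers-rhceph-5-rhel8
        ceph_namespace: satellite.example.com:443
        ceph_tag: latest
        name_prefix: acme-production-myosp16-osp_containers-
        name_suffix: ''
        namespace: satellite.example.com:443
        tag: '17.0'
      tag_from_label: '{version}-{release}'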
To use a specific container image version stored on your Red Hat Satellite Server, set the tag key-value pair to the specific version in the set dictionary. For example, to use the 17.0.2 image stream, set tag: 17.0.2 in the set dictionary.
You must define the containers-prepare-parameter.yaml environment file in the undercloud.conf configuration file, otherwise the undercloud uses the default values:

container_images_file = /home/stack/containers-prepare-parameter.yaml
Chapter 4. Installing the undercloud with containers
This chapter provides information on how to create a container-based undercloud and keep it updated.
4.1. Configuring director
The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy the default template as a foundation for your configuration.
Procedure
- Copy the default template to the home directory of the stack user:

  [stack@director ~]$ cp \
    /usr/share/python-tripleoclient/undercloud.conf.sample \
    ~/undercloud.conf
Edit the
undercloud.conffile. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value.
4.2. Director configuration parameters
The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors.
At minimum, you must set the container_images_file parameter to the environment file that contains your container image configuration. Without this parameter properly set to the appropriate file, director cannot obtain your container image rule set from the ContainerImagePrepare parameter nor your container registry authentication details from the ContainerImageRegistryCredentials parameter.
Defaults
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:
- additional_architectures
- A list of additional (kernel) architectures that an overcloud supports. Currently, the overcloud supports only the x86_64 architecture.
- certificate_generation_ca
- The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain.
- clean_nodes
- Defines whether to wipe the hard drive between deployments and after introspection.
- cleanup
- Defines whether to clean up temporary files. Set this to False to leave the temporary files used during deployment in place after you run the deployment command. This is useful for debugging the generated files or if errors occur.
- container_cli
- The CLI tool for container management. Leave this parameter set to podman. Red Hat Enterprise Linux 9.0 supports only podman.
- container_healthcheck_disabled
- Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false.
- container_images_file
- Heat environment file with container image information. This file can contain the following entries:
  - Parameters for all required container images
  - The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml.
- container_insecure_registries
- A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite Server if the undercloud is registered to Satellite.
- container_registry_mirror
- An optional registry-mirror that podman uses.
- custom_env_files
- Additional environment files that you want to add to the undercloud installation.
- deployment_user
- The user who installs the undercloud. Leave this parameter unset to use the current default user stack.
- discovery_default_driver
- Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled, and you must include the driver in the enabled_hardware_types list.
- enable_ironic; enable_ironic_inspector; enable_tempest; enable_validations
- Defines the core services that you want to enable for director. Leave these parameters set to true.
- enable_node_discovery
- Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake driver as a default, but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes.
- enable_routed_networks
- Defines whether to enable support for routed control plane networks.
- enabled_hardware_types
- A list of hardware types that you want to enable for the undercloud.
- generate_service_certificate
- Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the certificate_generation_ca parameter signs this certificate.
- heat_container_image
- URL for the heat container image to use. Leave unset.
- heat_native
- Run host-based undercloud configuration using heat-all. Leave as true.
- hieradata_override
- Path to a hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud.
- inspection_extras
- Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image.
- inspection_interface
- The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
- inspection_runbench
- Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes.
- ipv6_address_mode
- IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter:
  - dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6.
  - dhcpv6-stateful - Address configuration and optional information using DHCPv6.
- ipxe_enabled
- Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set this parameter to false to use standard PXE. For PowerPC deployments, or for hybrid PowerPC and x86 deployments, set this value to false.
- local_interface
- The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command.
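For example, an ip addr command might return output similar to the following; this is an illustrative sample, and your device names and addresses will differ:

  2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0
         valid_lft 3462sec preferred_lft 3462sec
  3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN
      link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff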
  In this example, the External NIC uses em0 and the Provisioning NIC uses em1, which is currently not configured. In this case, set the local_interface to em1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.
- local_ip
- The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. For IPv6, the local IP address prefix length must be /64 to support both stateful and stateless connections.
- local_mtu
- The maximum transmission unit (MTU) that you want to use for the local_interface. Do not exceed 1500 for the undercloud.
- local_subnet
- The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet.
- net_config_override
- Path to a network configuration override template. If you set this parameter, the undercloud uses a JSON or YAML format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf. Use this parameter when you want to configure bonding or add an option to the interface. For more information about customizing undercloud network interfaces, see Configuring undercloud network interfaces.
- networks_file
- Networks file to override for heat.
- output_dir
- Directory to output state, processed heat templates, and Ansible deployment files.
- overcloud_domain_name
- The DNS domain name that you want to use when you deploy the overcloud. Note: When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud.
- roles_file
- The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file.
- scheduler_max_attempts
- The maximum number of times that the scheduler attempts to deploy an instance. This value must be greater than or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling.
- service_principal
- The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA.
- subnets
- List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets.
- templates
- Heat templates file to override.
- undercloud_admin_host
- The IP address or hostname defined for director admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_admin_host is not in the same IP network as the local_ip, you must configure the interface on which you want the admin APIs on the undercloud to listen. By default, the admin APIs listen on the br-ctlplane interface. For information about how to configure undercloud network interfaces, see Configuring undercloud network interfaces.
- undercloud_debug
- Sets the log level of undercloud services to DEBUG. Set this value to true to enable the DEBUG log level.
- undercloud_enable_selinux
- Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue.
- undercloud_hostname
- Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately.
- undercloud_log_file
- The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log.
- undercloud_nameservers
- A list of DNS nameservers to use for the undercloud hostname resolution.
- undercloud_ntp_servers
- A list of network time protocol servers to help synchronize the undercloud date and time.
- undercloud_public_host
- The IP address or hostname defined for director public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_public_host is not in the same IP network as the local_ip, you must set the PublicVirtualInterface parameter to the public-facing interface on which you want the public APIs on the undercloud to listen. By default, the public APIs listen on the br-ctlplane interface. Set the PublicVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces.
- undercloud_service_certificate
- The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate.
- undercloud_timezone
- Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration.
- undercloud_update_packages
- Defines whether to update packages during the undercloud installation.
Subnets
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
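A sample such as the following sets illustrative default values:

  [ctlplane-subnet]
  cidr = 192.168.24.0/24
  dhcp_start = 192.168.24.5
  dhcp_end = 192.168.24.24
  inspection_iprange = 192.168.24.100,192.168.24.120
  gateway = 192.168.24.1
  masquerade = true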
You can specify as many provisioning networks as necessary to suit your environment.
Director cannot change the IP addresses for a subnet after director creates the subnet.
- cidr
- The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network.
- masquerade
- Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through director. Note: The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter.
- dhcp_start; dhcp_end
- The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate to your nodes. If not specified for the subnet, director determines the allocation pools by removing the values set for the local_ip, gateway, undercloud_admin_host, undercloud_public_host, and inspection_iprange parameters from the full IP range of the subnet. You can configure non-contiguous allocation pools for undercloud control plane subnets by specifying a list of start and end address pairs. Alternatively, you can use the dhcp_exclude option to exclude IP addresses within an IP address range. For example, the following configurations both create allocation pools 172.20.0.100-172.20.0.150 and 172.20.0.200-172.20.0.250:

  Option 1

  dhcp_start = 172.20.0.100,172.20.0.200
  dhcp_end = 172.20.0.150,172.20.0.250

  Option 2

  dhcp_start = 172.20.0.100
  dhcp_end = 172.20.0.250
  dhcp_exclude = 172.20.0.151-172.20.0.199

- dhcp_exclude
- IP addresses to exclude in the DHCP allocation range. For example, the following configuration excludes the IP address 172.20.0.105 and the IP address range 172.20.0.210-172.20.0.219:

  dhcp_exclude = 172.20.0.105,172.20.0.210-172.20.0.219

- dns_nameservers
- DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter.
- gateway
- The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly.
- host_routes
- Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud.
- inspection_iprange
- Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end, but must be in the same IP subnet.
Modify the values for these parameters to suit your configuration. When complete, save the file.
4.3. Installing director
Complete the following steps to install director and perform some basic post-installation tasks.
Procedure
Run the following command to install director on the undercloud:
[stack@director ~]$ openstack undercloud install

This command launches the director configuration script. Director installs additional packages and configures its services according to the configuration in the undercloud.conf file. This script takes several minutes to complete. The script generates two files:
- /home/stack/tripleo-deploy/undercloud/tripleo-undercloud-passwords.yaml - A list of all passwords for the director services.
- /home/stack/stackrc - A set of initialization variables to help you access the director command line tools.
The script also starts all OpenStack Platform service containers automatically. You can check the enabled containers with the following command:
[stack@director ~]$ sudo podman ps
stackuser to use the command line tools, run the following command:source ~/stackrc
[stack@director ~]$ source ~/stackrcCopy to Clipboard Copied! Toggle word wrap Toggle overflow The prompt now indicates that OpenStack commands authenticate and execute against the undercloud;
(undercloud) [stack@director ~]$Copy to Clipboard Copied! Toggle word wrap Toggle overflow
The director installation is complete. You can now use the director command line tools.
4.4. Performing a minor update of a containerized undercloud
Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

  $ source ~/stackrc

- Update the director main packages with the dnf update command:

  $ sudo dnf update -y python3-tripleoclient ansible-*

- Update the undercloud environment:

  $ openstack undercloud upgrade

- Wait until the undercloud update process completes.
- Reboot the undercloud to update the operating system's kernel and other system packages:

  $ sudo reboot

- Wait until the node boots.
Chapter 5. Deploying and updating an overcloud with containers
This chapter provides information on how to create a container-based overcloud and keep it updated.
5.1. Deploying an overcloud
This procedure demonstrates how to deploy an overcloud with minimal configuration. The result is a basic two-node overcloud (1 Controller node, 1 Compute node).
Procedure
- Source the stackrc file:

  $ source ~/stackrc

- Run the deploy command and include the file that contains your overcloud image locations (usually overcloud_images.yaml):

  (undercloud) $ openstack overcloud deploy --templates \
    -e /home/stack/templates/overcloud_images.yaml \
    --ntp-server pool.ntp.org

- Wait until the overcloud completes deployment.
5.2. Updating an overcloud
For information on updating a containerized overcloud, see the Keeping Red Hat OpenStack Platform Updated guide.
Chapter 6. Working with containerized services
This chapter provides some examples of commands to manage containers and how to troubleshoot your OpenStack Platform containers.
6.1. Managing containerized services
Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services.
Listing containers and images
To list running containers, run the following command:
$ sudo podman ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo podman ps --all
To list container images, run the following command:
$ sudo podman images
Inspecting container properties
To view the properties of a container or container images, use the podman inspect command. For example, to inspect the keystone container, run the following command:
$ sudo podman inspect keystone
Managing containers with Systemd services
Previous versions of OpenStack Platform managed containers with Docker and its daemon. Now, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container.
It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead.
To check a container status, run the systemctl status command:
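For example, to check the status of the keystone container; the output below is an illustrative sample:

  $ sudo systemctl status tripleo_keystone
  ● tripleo_keystone.service - keystone container
     Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago
   Main PID: 29012 (podman)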
To stop a container, run the systemctl stop command:
$ sudo systemctl stop tripleo_keystone
To start a container, run the systemctl start command:
$ sudo systemctl start tripleo_keystone
To restart a container, run the systemctl restart command:
$ sudo systemctl restart tripleo_keystone
Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations:

- Clean exit code or signal, such as running the podman stop command.
- Unclean exit code, such as the podman container crashing after a start.
- Unclean signals.
- Timeout if the container takes more than 1 minute 30 seconds to start.
For more information about Systemd services, see the systemd.service documentation.
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart.
Monitoring podman containers with Systemd timers
The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts.
To list all OpenStack Platform containers timers, run the systemctl list-timers command and limit the output to lines containing tripleo:
$ sudo systemctl list-timers | grep tripleo
Mon 2019-02-18 20:18:30 UTC  1s left   Mon 2019-02-18 20:17:26 UTC  1min 2s ago   tripleo_nova_metadata_healthcheck.timer   tripleo_nova_metadata_healthcheck.service
Mon 2019-02-18 20:18:34 UTC  5s left   Mon 2019-02-18 20:17:23 UTC  1min 5s ago   tripleo_keystone_healthcheck.timer        tripleo_keystone_healthcheck.service
Mon 2019-02-18 20:18:35 UTC  6s left   Mon 2019-02-18 20:17:13 UTC  1min 15s ago  tripleo_memcached_healthcheck.timer       tripleo_memcached_healthcheck.service
(...)
To check the status of a specific container timer, run the systemctl status command for the healthcheck service:
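For example, for the tripleo_keystone_healthcheck service; the output below is an illustrative sample:

  $ sudo systemctl status tripleo_keystone_healthcheck.service
  ● tripleo_keystone_healthcheck.service - keystone healthcheck
     Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.service; disabled; vendor preset: disabled)
     Active: inactive (dead) since Mon 2019-02-18 20:22:46 UTC; 22s ago
    Process: 115581 ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck (code=exited, status=0/SUCCESS)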
To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command:
$ sudo systemctl status tripleo_keystone_healthcheck.timer
● tripleo_keystone_healthcheck.timer - keystone container healthcheck
   Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled)
   Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago
If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out but will run according to the timer. You can also start the check manually.
The podman ps command does not show the container health status.
Checking container logs
Red Hat OpenStack Platform 17.0 logs all standard output (stdout) and standard error (stderr) from all containers, consolidated in a single file for each container in /var/log/containers/stdout.
The host also applies log rotation to this directory, which prevents huge files and disk space issues.
If a container is replaced, the new container outputs to the same log file, because podman uses the container name rather than the container ID.
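For example, to follow the consolidated output of a single container as it runs, tail its file in that directory. A short sketch, assuming the file is named after the container:
$ sudo tail -f /var/log/containers/stdout/keystone.log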
You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo podman logs keystone
Accessing containers
To enter the shell for a containerized service, use the podman exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo podman exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo podman exec --user 0 -it <NAME OR ID> /bin/bash
To exit the container, run the following command:
# exit
6.2. Troubleshooting containerized services
If a containerized service fails during or after overcloud deployment, use the following recommendations to determine the root cause for the failure:
Before running these commands, check that you are logged in to an overcloud node rather than running these commands on the undercloud.
Checking the container logs
Each container retains standard output from its main process. This output acts as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, use the following command:
$ sudo podman logs keystone
In most cases, this log provides the cause of a container’s failure.
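If the log is long, you can limit the output to recent entries with the --since option of podman logs. For example, to show only the last five minutes of keystone output:
$ sudo podman logs --since 5m keystone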
Inspecting the container
In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data:
$ sudo podman inspect keystone
This provides a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command:
$ sudo podman inspect keystone | jq .[0].Mounts
You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option:
$ sudo podman inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone
The --format option uses Go syntax to create queries.
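As a simpler illustration of the same Go template syntax, the following query prints only the image that the container runs:
$ sudo podman inspect --format '{{.Config.Image}}' keystone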
Use these options in conjunction with the podman run command to recreate the container for troubleshooting purposes:
$ OPTIONS=$( sudo podman inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone )
$ sudo podman run --rm $OPTIONS /bin/bash
Running commands in the container
In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following podman command to execute commands within a running container. For example, to run a command in the keystone container:
$ sudo podman exec -ti keystone <COMMAND>
The -ti options run the command through an interactive pseudoterminal.
Replace <COMMAND> with your desired command. For example, each container has a health check script to verify the service connection. You can run the health check script for keystone with the following command:
$ sudo podman exec -ti keystone /openstack/healthcheck
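The health check script is expected to exit with a non-zero status when the check fails, so you can inspect the result directly; a brief sketch:
$ sudo podman exec -ti keystone /openstack/healthcheck; echo "Health check exit code: $?"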
To access the container’s shell, run podman exec using /bin/bash as the command:
$ sudo podman exec -ti keystone /bin/bash
Exporting a container
When a container fails, you might need to investigate the full contents of its file system. In this case, you can export the full file system of a container as a tar archive. For example, to export the file system of the keystone container, run the following command:
$ sudo podman export keystone -o keystone.tar
This command creates the keystone.tar archive, which you can extract and explore.
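For example, to extract the archive into a working directory for inspection:
$ mkdir keystone-export
$ tar -xf keystone.tar -C keystone-export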
Chapter 7. Comparing Systemd services to containerized services
This chapter provides reference material to show how containerized services differ from Systemd services.
7.1. Systemd services and containerized services
The following table shows the correlation between Systemd-based services and the podman containers that replace them.
| Component | Systemd service | Containers |
|---|---|---|
| OpenStack Image Storage (glance) | openstack-glance-api.service | glance_api |
| HAProxy | haproxy.service | haproxy |
| OpenStack Orchestration (heat) | openstack-heat-api.service, openstack-heat-api-cfn.service, openstack-heat-engine.service | heat_api, heat_api_cfn, heat_api_cron, heat_engine |
| OpenStack Bare Metal (ironic) | openstack-ironic-api.service, openstack-ironic-conductor.service | ironic_api, ironic_conductor, ironic_pxe_http, ironic_pxe_tftp |
| Keepalived | keepalived.service | keepalived |
| OpenStack Identity (keystone) | httpd.service (keystone) | keystone, keystone_cron |
| Logrotate | logrotate.service | logrotate_crond |
| Memcached | memcached.service | memcached |
| MySQL | mariadb.service | mysql |
| OpenStack Networking (neutron) | neutron-server.service, neutron-dhcp-agent.service, neutron-l3-agent.service, neutron-metadata-agent.service, neutron-openvswitch-agent.service | neutron_api, neutron_dhcp, neutron_l3_agent, neutron_metadata_agent, neutron_ovs_agent |
| OpenStack Compute (nova) | openstack-nova-api.service, openstack-nova-conductor.service, openstack-nova-scheduler.service, openstack-nova-novncproxy.service, openstack-nova-compute.service | nova_api, nova_api_cron, nova_conductor, nova_scheduler, nova_vnc_proxy, nova_compute |
| RabbitMQ | rabbitmq-server.service | rabbitmq |
| OpenStack Object Storage (swift) | openstack-swift-account.service, openstack-swift-container.service, openstack-swift-object.service, openstack-swift-proxy.service | swift_account_server, swift_container_server, swift_object_server, swift_proxy |
| OpenStack Messaging (zaqar) | openstack-zaqar.service | zaqar, zaqar_websocket |
| Aodh | openstack-aodh-evaluator.service, openstack-aodh-listener.service, openstack-aodh-notifier.service | aodh_api, aodh_evaluator, aodh_listener, aodh_notifier |
| Gnocchi | openstack-gnocchi-metricd.service, openstack-gnocchi-statsd.service | gnocchi_api, gnocchi_metricd, gnocchi_statsd |
| Ceilometer | openstack-ceilometer-central.service, openstack-ceilometer-notification.service | ceilometer_agent_central, ceilometer_agent_notification |
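To see which Systemd units manage the containers on a given node, you can list the tripleo-prefixed units; a short sketch:
$ sudo systemctl list-units 'tripleo_*'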
7.2. Systemd log locations vs containerized log locations
The following table shows Systemd-based OpenStack logs and their equivalents for containers. All container-based log locations are available on the physical host and are mounted to the container.
| OpenStack service | Systemd service logs | Container logs |
|---|---|---|
| aodh | /var/log/aodh/ | /var/log/containers/aodh/ |
| ceilometer | /var/log/ceilometer/ | /var/log/containers/ceilometer/ |
| cinder | /var/log/cinder/ | /var/log/containers/cinder/ |
| glance | /var/log/glance/ | /var/log/containers/glance/ |
| gnocchi | /var/log/gnocchi/ | /var/log/containers/gnocchi/ |
| heat | /var/log/heat/ | /var/log/containers/heat/ |
| horizon | /var/log/horizon/ | /var/log/containers/horizon/ |
| keystone | /var/log/keystone/ | /var/log/containers/keystone/ |
| databases | /var/log/mariadb/ | /var/log/containers/mysql/ |
| neutron | /var/log/neutron/ | /var/log/containers/neutron/ |
| nova | /var/log/nova/ | /var/log/containers/nova/ |
| rabbitmq | /var/log/rabbitmq/ | /var/log/containers/rabbitmq/ |
| redis | /var/log/redis/ | /var/log/containers/redis/ |
| swift | /var/log/swift/ | /var/log/containers/swift/ |
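To confirm which container log directories exist on a given node, list the parent directory on the host:
$ sudo ls /var/log/containers/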
7.3. Systemd configuration vs containerized configuration
The following table shows Systemd-based OpenStack configuration locations and their equivalents for containers. All container-based configuration locations are available on the physical host, are mounted into the container, and are merged (via kolla) into the configuration within each respective container.
| OpenStack service | Systemd service configuration | Container configuration |
|---|---|---|
| aodh | /etc/aodh/ | /var/lib/config-data/puppet-generated/aodh/etc/aodh/ |
| ceilometer | /etc/ceilometer/ | /var/lib/config-data/puppet-generated/ceilometer/etc/ceilometer/ |
| cinder | /etc/cinder/ | /var/lib/config-data/puppet-generated/cinder/etc/cinder/ |
| glance | /etc/glance/ | /var/lib/config-data/puppet-generated/glance_api/etc/glance/ |
| gnocchi | /etc/gnocchi/ | /var/lib/config-data/puppet-generated/gnocchi/etc/gnocchi/ |
| haproxy | /etc/haproxy/ | /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/ |
| heat | /etc/heat/ | /var/lib/config-data/puppet-generated/heat/etc/heat/ |
| horizon | /etc/openstack-dashboard/ | /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/ |
| keystone | /etc/keystone/ | /var/lib/config-data/puppet-generated/keystone/etc/keystone/ |
| databases | /etc/my.cnf.d/ | /var/lib/config-data/puppet-generated/mysql/etc/my.cnf.d/ |
| neutron | /etc/neutron/ | /var/lib/config-data/puppet-generated/neutron/etc/neutron/ |
| nova | /etc/nova/ | /var/lib/config-data/puppet-generated/nova/etc/nova/ |
| rabbitmq | /etc/rabbitmq/ | /var/lib/config-data/puppet-generated/rabbitmq/etc/rabbitmq/ |
| redis | /etc/redis/ | /var/lib/config-data/puppet-generated/redis/etc/redis/ |
| swift | /etc/swift/ | /var/lib/config-data/puppet-generated/swift/etc/swift/ |
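To verify that the merged configuration inside a container matches the host copy, export the in-container file and compare the two. A minimal sketch using the keystone example; the files should largely match:
$ sudo podman exec keystone cat /etc/keystone/keystone.conf > /tmp/keystone.conf.container
$ sudo diff /tmp/keystone.conf.container /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf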