Transitioning to Containerized Services
A basic guide to working with OpenStack Platform containerized services
Chapter 1. Introduction
Past versions of Red Hat OpenStack Platform used services managed with Systemd. However, more recent versions of OpenStack Platform now use containers to run services. Some administrators might not have a good understanding of how containerized OpenStack Platform services operate, so this guide aims to help you understand OpenStack Platform container images and containerized services. This includes:
- How to obtain and modify container images
- How to manage containerized services in the overcloud
- Understanding how containers differ from Systemd services
The main goal is to help you gain enough knowledge of containerized OpenStack Platform services to transition from a Systemd-based environment to a container-based environment.
1.1. Containerized Services and Kolla
Each of the main Red Hat OpenStack Platform services runs in containers. This provides a method of keeping each service within its own isolated namespace, separated from the host. This means:
- The deployment of services is performed by pulling container images from the Red Hat Container Catalog and running them.
- Management functions, such as starting and stopping services, operate through the docker command.
- Upgrading containers requires pulling new container images and replacing the existing containers with newer versions.
Red Hat OpenStack Platform uses a set of containers built and managed with the kolla toolset.
Chapter 2. Obtaining and modifying container images
A containerized overcloud requires access to a registry with the required container images. This chapter provides information on how to prepare the registry and your undercloud and overcloud configuration to use container images for Red Hat OpenStack Platform.
2.1. Preparing container images
The overcloud configuration requires initial registry configuration to determine where to obtain images and how to store them. Complete the following steps to generate and customize an environment file for preparing your container images.
Procedure
- Log in to your undercloud host as the stack user.
Generate the default container image preparation file:
$ openstack tripleo container image prepare default \
  --local-push-destination \
  --output-env-file containers-prepare-parameter.yaml

This command includes the following additional options:
- --local-push-destination sets the registry on the undercloud as the location for container images. This means that the director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. The director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option.
- --output-env-file is the name of an environment file. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml.

Note: You can also use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud.

- Edit the containers-prepare-parameter.yaml file and make the modifications to suit your requirements.
2.2. Container image preparation parameters
The default file for preparing your containers (containers-prepare-parameter.yaml) contains the ContainerImagePrepare Heat parameter. This parameter defines a list of strategies for preparing a set of images:
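For example, a minimal entry might look like the following sketch (the namespace and tag values are illustrative):

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.access.redhat.com/rhosp14
      name_prefix: openstack-
      name_suffix: ''
      tag: latest
    tag_from_label: '{version}-{release}'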
Each strategy accepts a set of sub-parameters that define which images to use and what to do with them. The following table contains information about the sub-parameters you can use with each ContainerImagePrepare strategy:
| Parameter | Description |
|---|---|
| excludes | List of image name substrings to exclude from a strategy. |
| includes | List of image name substrings to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. |
| modify_append_tag | String to append to the tag for the destination image. For example, if you pull an image with the tag 14.0-89 and set modify_append_tag to -hotfix, the director tags the final image as 14.0-89-hotfix. |
| modify_only_with_labels | A dictionary of image labels that filter the images to modify. If an image matches the labels defined, the director includes the image in the modification process. |
| modify_role | String of Ansible role names to run during upload but before pushing the image to the destination registry. |
| modify_vars | Dictionary of variables to pass to modify_role. |
| push_destination | The namespace of the registry to push images during the upload process. When you specify a namespace for this parameter, all image parameters use this namespace too. If set to true, the push_destination is set to the undercloud registry namespace. |
| pull_source | The source registry from where to pull the original container images. |
| set | A dictionary of key: value definitions that define where to obtain the initial images. |
| tag_from_label | Defines the label pattern to tag the resulting images. Usually set to {version}-{release}. |
The set parameter accepts a set of key: value definitions. The following table contains information about the keys:
| Key | Description |
|---|---|
| ceph_image | The name of the Ceph Storage container image. |
| ceph_namespace | The namespace of the Ceph Storage container image. |
| ceph_tag | The tag of the Ceph Storage container image. |
| name_prefix | A prefix for each OpenStack service image. |
| name_suffix | A suffix for each OpenStack service image. |
| namespace | The namespace for each OpenStack service image. |
| neutron_driver | The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container, set to odl to use OpenDaylight-based containers, or set to ovn to use OVN-based containers. |
| tag | The tag that the director uses to identify the images to pull from the source registry. You usually keep this key set to latest. |
The set section might contain several parameters that begin with openshift_. These parameters are for various scenarios involving OpenShift-on-OpenStack.
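For example, a set section that uses these keys might look like the following sketch (namespaces and tags are illustrative):

set:
  ceph_image: rhceph-3-rhel7
  ceph_namespace: registry.access.redhat.com/rhceph
  ceph_tag: latest
  name_prefix: openstack-
  name_suffix: ''
  namespace: registry.access.redhat.com/rhosp14
  neutron_driver: null
  tag: latest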
2.3. Layering image preparation entries
The value of the ContainerImagePrepare parameter is a YAML list. This means you can specify multiple entries. The following example demonstrates two entries where the director uses the latest version of all images except for the nova-api image, which uses the version tagged with 14.0-44:
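A sketch of this two-entry configuration (the registry namespace is illustrative):

parameter_defaults:
  ContainerImagePrepare:
  - tag_from_label: '{version}-{release}'
    push_destination: true
    excludes:
    - nova-api
    set:
      namespace: registry.access.redhat.com/rhosp14
      name_prefix: openstack-
      name_suffix: ''
  - push_destination: true
    includes:
    - nova-api
    set:
      namespace: registry.access.redhat.com/rhosp14
      tag: 14.0-44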
The includes and excludes entries control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must include the includes or excludes value to be considered a match.
2.4. Modifying images during preparation
It is possible to modify images during image preparation, then immediately deploy with modified images. Scenarios for modifying images include:
- As part of a continuous integration pipeline where images are modified with the changes being tested before deployment.
- As part of a development workflow where local changes need to be deployed for testing and development.
- When changes need to be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes.
To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the Heat parameters to refer to the modified image.
The Ansible role tripleo-modify-image conforms with the required role interface, and provides the behaviour necessary for the modify use-cases. Modification is controlled using modify-specific keys in the ContainerImagePrepare parameter:
- modify_role specifies the Ansible role to invoke for each image to modify.
- modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. It is recommended to change modify_append_tag whenever you modify the image.
- modify_vars is a dictionary of Ansible variables to pass to the role.
To select a use-case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role.
While developing and testing the ContainerImagePrepare entries that modify images, it is recommended to run the image prepare command without any additional options to confirm the image is modified as expected:
$ sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml
2.5. Updating existing packages on container images
The following example ContainerImagePrepare entry updates all packages on the images using the undercloud host's yum repository configuration:
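A sketch of such an entry, using the tripleo-modify-image role described earlier (the task file name and variables follow that role's interface; treat them as assumptions here):

ContainerImagePrepare:
- push_destination: true
  ...
  modify_role: tripleo-modify-image
  modify_append_tag: "-updated"
  modify_vars:
    tasks_from: yum_update.yml
    yum_repos_dir_path: /etc/yum.repos.d
  ...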
2.6. Installing additional RPM files to container images
You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package not available through a package repository. For example, the following ContainerImagePrepare entry installs some hotfix packages only on the nova-compute image:
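A sketch of such an entry (the RPM directory path is illustrative, and the rpm_install.yml task file follows the tripleo-modify-image role interface):

ContainerImagePrepare:
- push_destination: true
  ...
  includes:
  - nova-compute
  modify_role: tripleo-modify-image
  modify_append_tag: "-hotfix"
  modify_vars:
    tasks_from: rpm_install.yml
    rpms_path: /home/stack/nova-hotfix-pkgs
  ...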
2.7. Modifying container images with a custom Dockerfile
For maximum flexibility, you can specify a directory containing a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. The following example runs the custom Dockerfile on the nova-compute image:
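A sketch of such an entry (the directory path is illustrative):

ContainerImagePrepare:
- push_destination: true
  ...
  includes:
  - nova-compute
  modify_role: tripleo-modify-image
  modify_append_tag: "-hotfix"
  modify_vars:
    tasks_from: modify_image.yml
    modify_dir_path: /home/stack/nova-custom
  ...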
An example /home/stack/nova-custom/Dockerfile follows. After running any USER root directives, you must switch back to the original image default user:
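A minimal sketch (the base image tag and the customize.sh script are illustrative; the role rewrites the FROM directive in any case):

FROM registry.access.redhat.com/rhosp14/openstack-nova-compute:latest

USER "root"

COPY customize.sh /tmp/
RUN /tmp/customize.sh

USER "nova"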
2.8. Preparing a Satellite server for container images
Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite server also acts as a registry for other container-enabled systems to use. For more detailed information on managing container images, see "Managing Container Images" in the Red Hat Satellite 6 Content Management Guide.
The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME. Substitute this organization for your own Satellite 6 organization.
Procedure
Create a list of all container images, including the Ceph images:
$ sudo docker search "registry.access.redhat.com/rhosp14" | awk '{ print $2 }' | grep -v beta | sed "s/registry.access.redhat.com\///g" | tail -n+2 > satellite_images
$ echo "rhceph/rhceph-3-rhel7" >> satellite_images
- Copy the satellite_images file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool on the undercloud.
- Run the following hammer command to create a new product (OSP14 Containers) in your Satellite organization:

$ hammer product create \
  --organization "ACME" \
  --name "OSP14 Containers"

This custom product will contain the images.
Add the base container image to the product:
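For example, a command of the following shape adds the base image (the upstream image name rhosp14/openstack-base is an assumption based on the rhosp14 registry layout):

$ hammer repository create \
  --organization "ACME" \
  --product "OSP14 Containers" \
  --content-type docker \
  --url https://registry.access.redhat.com \
  --docker-upstream-name rhosp14/openstack-base \
  --name base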
Add the overcloud container images from the satellite_images file.
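A sketch of a loop that creates one repository per image in the file (the name-mangling in IMAGENAME is illustrative):

$ while read IMAGE; do \
    IMAGENAME=$(echo $IMAGE | cut -d"/" -f2 | sed "s/openstack-//g" | sed "s/:.*//g") ; \
    hammer repository create \
      --organization "ACME" \
      --product "OSP14 Containers" \
      --content-type docker \
      --url https://registry.access.redhat.com \
      --docker-upstream-name $IMAGE \
      --name $IMAGENAME ; \
  done < satellite_images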
Synchronize the container images:
$ hammer product synchronize \
  --organization "ACME" \
  --name "OSP14 Containers"

Wait for the Satellite server to complete synchronization.
Note: Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. For more information, see the "Authentication" section in the Hammer CLI Guide.
- If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application life cycle. For example, if you have an environment called production in your life cycle and you want the container images to be available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see "Managing Container Images with Content Views".
- Check the available tags for the base image:

$ hammer docker tag list --repository "base" \
  --organization "ACME" \
  --environment "production" \
  --content-view "myosp14" \
  --product "OSP14 Containers"

This command displays tags for the OpenStack Platform container images within a content view for a particular environment.
Return to the undercloud and generate a default environment file for preparing images using your Satellite server as a source. Run the following example command to generate the environment file:
(undercloud) $ openstack tripleo container image prepare default \
  --output-env-file containers-prepare-parameter.yaml

- --output-env-file is the name of an environment file. The contents of this file include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml.
- Edit the containers-prepare-parameter.yaml file and modify the following parameters:
- namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 5000.
- name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views:
  - If you use content views, the structure is [org]-[environment]-[content view]-[product]-. For example: acme-production-myosp14-osp14_containers-.
  - If you do not use content views, the structure is [org]-[product]-. For example: acme-osp14_containers-.
- ceph_namespace, ceph_image, ceph_tag - If you use Ceph Storage, include these additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option.
The following example environment file contains Satellite-specific parameters:
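A sketch of such a file, assuming a Satellite server at satellite.example.com and the content view conventions described above:

parameter_defaults:
  ContainerImagePrepare:
  - set:
      ceph_image: acme-production-myosp14-osp14_containers-rhceph-3-rhel7
      ceph_namespace: satellite.example.com:5000
      ceph_tag: latest
      name_prefix: acme-production-myosp14-osp14_containers-
      name_suffix: ''
      namespace: satellite.example.com:5000
      neutron_driver: null
      tag: latest
    tag_from_label: '{version}-{release}'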
Use this environment file when creating both your undercloud and overcloud.
Chapter 3. Installing the undercloud with containers
This chapter provides information on how to create a container-based undercloud and keep it updated.
3.1. Configuring the director
The director installation process requires certain settings in the undercloud.conf configuration file, which the director reads from the stack user’s home directory. This procedure demonstrates how to use the default template as a foundation for your configuration.
Procedure
- Copy the default template to the stack user's home directory:

[stack@director ~]$ cp \
  /usr/share/python-tripleoclient/undercloud.conf.sample \
  ~/undercloud.conf
- Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value.
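For example, a minimal edit might set only the local networking and container image parameters (the interface name and addresses are illustrative; see the parameter reference in the next section):

[DEFAULT]
local_interface = eth1
local_ip = 192.168.24.1/24
container_images_file = /home/stack/containers-prepare-parameter.yaml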
3.2. Director configuration parameters
The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors.
Defaults
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:
- additional_architectures
- A list of additional (kernel) architectures that an overcloud supports. Currently, the overcloud supports the ppc64le architecture.

Note: When you enable support for ppc64le, you must also set ipxe_enabled to False.

- certificate_generation_ca
- The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain.
- Defines whether to wipe the hard drive between deployments and after introspection.
- cleanup
- Cleans up temporary files. Set this to False to leave the temporary files used during deployment in place after the command is run. This is useful for debugging the generated files or if errors occur.
- container_images_file
Heat environment file with container image information. This can either be:
- Parameters for all required container images
- Or the ContainerImagePrepare parameter to drive the required image preparation. Usually the file containing this parameter is named containers-prepare-parameter.yaml.
- custom_env_files
- Additional environment file to add to the undercloud installation.
- deployment_user
- The user installing the undercloud. Leave this parameter unset to use the current default user (stack).
- discovery_default_driver
- Sets the default driver for automatically enrolled nodes. Requires enable_node_discovery to be enabled, and you must include the driver in the enabled_hardware_types list.
- docker_insecure_registries
- A list of insecure registries for docker to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, docker has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite server if the undercloud is registered to Satellite.
- docker_registry_mirror
- An optional registry-mirror configured in /etc/docker/daemon.json.
- enable_ironic; enable_ironic_inspector; enable_mistral; enable_tempest; enable_validations; enable_zaqar
- Defines the core services to enable for director. Leave these parameters set to true.
- enable_ui
- Defines whether to install the director web UI. Use this parameter to perform overcloud planning and deployments through a graphical web interface. Note that the UI is only available with SSL/TLS enabled using either the undercloud_service_certificate or generate_service_certificate parameter.
- enable_node_discovery
- Automatically enrolls any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake_pxe driver as a default, but you can set discovery_default_driver to override this. You can also use introspection rules to specify driver information for newly enrolled nodes.
- enable_novajoin
- Defines whether to install the novajoin metadata service in the undercloud.
- enable_routed_networks
- Defines whether to enable support for routed control plane networks.
- enable_swift_encryption
- Defines whether to enable Swift encryption at-rest.
- enable_telemetry
- Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set the enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false, which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms.
- enabled_hardware_types
- A list of hardware types to enable for the undercloud.
- generate_service_certificate
- Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the certificate_generation_ca parameter signs this certificate.
- heat_container_image
- URL for the heat container image to use. Leave unset.
- heat_native
- Use native heat templates. Leave this as true.
- hieradata_override
- Path to a hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. See Configuring hieradata on the undercloud for details on using this feature.
- inspection_extras
- Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect package on the introspection image.
- inspection_interface
- The bridge that the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
- inspection_runbench
- Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes.
- ipa_otp
- Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled.
- ipxe_enabled
- Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set this to false to use standard PXE.
- local_interface
- The chosen interface for the director's Provisioning NIC. This is also the device that the director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, in an environment where the External NIC uses eth0 and the Provisioning NIC uses eth1 (currently not configured), set the local_interface to eth1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.
- local_ip
- The IP address defined for the director's Provisioning NIC. This is also the IP address that the director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.
- local_mtu
- The MTU to use for the local_interface. Do not exceed 1500 for the undercloud.
- local_subnet
- The local subnet to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet.
- net_config_override
- Path to a network configuration override template. If you set this parameter, the undercloud uses a JSON format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf. See /usr/share/python-tripleoclient/undercloud.conf.sample for an example.
- output_dir
- Directory to output state, processed heat templates, and Ansible deployment files.
- overcloud_domain_name
The DNS domain name to use when deploying the overcloud.
Note: When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud.
- roles_file
- The roles file to override for undercloud installation. It is highly recommended to leave unset so that the director installation uses the default roles file.
- scheduler_max_attempts
- Maximum number of times the scheduler attempts to deploy an instance. This value must be greater than or equal to the number of bare metal nodes that you expect to deploy at once, to work around potential race conditions when scheduling.
- service_principal
- The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA.
- subnets
- List of routed network subnets for provisioning and introspection. See Subnets for more information. The default value includes only the ctlplane-subnet subnet.
- templates
- Heat templates file to override.
- undercloud_admin_host
- The IP address defined for the director Admin API when using SSL/TLS. This is an IP address for administration endpoint access over SSL/TLS. The director configuration attaches the director's IP address to its software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_debug
- Sets the log level of undercloud services to DEBUG. Set this value to true to enable.
- undercloud_enable_selinux
- Enables or disables SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue.
- undercloud_hostname
- Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but the user must configure all system host name settings appropriately.
- undercloud_log_file
- The path to a log file to store the undercloud install/upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log.
- undercloud_nameservers
- A list of DNS nameservers to use for the undercloud hostname resolution.
- undercloud_ntp_servers
- A list of network time protocol servers to help synchronize the undercloud date and time.
- undercloud_public_host
- The IP address defined for the director Public API when using SSL/TLS. This is an IP address for accessing the director endpoints externally over SSL/TLS. The director configuration attaches this IP address to the director software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_service_certificate
- The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate.
- undercloud_update_packages
- Defines whether to update packages during the undercloud installation.
Subnets
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
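A sketch of such a section, using the subnet parameters described below (address ranges are illustrative):

[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
gateway = 192.168.24.1
masquerade = true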
You can specify as many provisioning networks as necessary to suit your environment.
- gateway
- The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for the director or want to use an external gateway directly.
The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter.
- cidr
- The network that the director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network.
- masquerade
- Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through the director.
- dhcp_start; dhcp_end
- The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.
Modify the values for these parameters to suit your configuration. When complete, save the file.
3.3. Installing the director
Complete the following procedure to install the director and perform some basic post-installation tasks.
Procedure
Run the following command to install the director on the undercloud:
[stack@director ~]$ openstack undercloud install

This launches the director's configuration script. The director installs additional packages and configures its services according to the configuration in the undercloud.conf file. This script takes several minutes to complete.

The script generates two files when complete:
- undercloud-passwords.conf - A list of all passwords for the director's services.
- stackrc - A set of initialization variables to help you access the director's command line tools.
The script also starts all OpenStack Platform service containers automatically. Check the enabled containers using the following command:
[stack@director ~]$ sudo docker ps

The script adds the stack user to the docker group to give the stack user access to container management commands. Refresh the stack user's permissions with the following command:

[stack@director ~]$ exec su -l stack

The command prompts you to log in again. Enter the stack user's password.
To initialize the stack user to use the command line tools, run the following command:

[stack@director ~]$ source ~/stackrc

The prompt now indicates that OpenStack commands authenticate and execute against the undercloud:

(undercloud) [stack@director ~]$
The director installation is complete. You can now use the director’s command line tools.
3.4. Performing a minor update of a containerized undercloud
The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment.
Procedure
- Log in to the director as the stack user.
- Run yum to upgrade the director's main packages:

$ sudo yum update -y python-tripleoclient* openstack-tripleo-common openstack-tripleo-heat-templates

- The director uses the openstack undercloud upgrade command to update the undercloud environment. Run the command:

$ openstack undercloud upgrade

- Wait until the undercloud upgrade process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
$ sudo reboot

- Wait until the node boots.
Chapter 4. Deploying and updating an overcloud with containers
This chapter provides information on how to create a container-based overcloud and keep it updated.
4.1. Deploying an overcloud
This procedure demonstrates how to deploy an overcloud with a minimum configuration. The result is a basic two-node overcloud (1 Controller node, 1 Compute node).
Procedure
- Source the stackrc file:

$ source ~/stackrc

- Run the deploy command and include the file containing your overcloud image locations (usually overcloud_images.yaml):

(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/overcloud_images.yaml \
  --ntp-server pool.ntp.org

- Wait until the overcloud completes deployment.
4.2. Updating an overcloud
For information on updating a containerized overcloud, see the Keeping Red Hat OpenStack Platform Updated guide.
Chapter 5. Working with containerized services
This chapter provides some example commands for managing containers and information about troubleshooting your OpenStack Platform containers.
5.1. Managing containerized services
OpenStack Platform runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common docker commands you can run on a node to manage containerized services. For more comprehensive information about using docker to manage containers, see "Working with Docker formatted containers" in the Getting Started with Containers guide.
Listing containers and images
To list running containers, run the following command:
$ sudo docker ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo docker ps --all
To list container images, run the following command:
$ sudo docker images
Inspecting container properties
To view the properties of a container or container images, use the docker inspect command. For example, to inspect the keystone container, run the following command:
$ sudo docker inspect keystone
Managing basic container operations
To restart a containerized service, use the docker restart command. For example, to restart the keystone container, run the following command:
$ sudo docker restart keystone
To stop a containerized service, use the docker stop command. For example, to stop the keystone container, run the following command:
$ sudo docker stop keystone
To start a stopped containerized service, use the docker start command. For example, to start the keystone container, run the following command:
$ sudo docker start keystone
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the node's local file system in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the node's local file system, which overwrites any changes made within the container before the restart.
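A short sketch of the workflow this implies: edit the puppet-generated copy on the host, then restart the container so it regenerates its configuration from the changed file.

$ sudo vi /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
$ sudo docker restart keystone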
Monitoring containers
To check the logs for a containerized service, use the docker logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo docker logs keystone
Accessing containers
To enter the shell for a containerized service, use the docker exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo docker exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo docker exec --user 0 -it <NAME OR ID> /bin/bash
To exit from the container, run the following command:
exit
Enabling swift-ring-builder on undercloud and overcloud
For continuity considerations in Object Storage (swift) builds, the swift-ring-builder and swift_object_server commands are no longer packaged on the undercloud or overcloud nodes. However, the commands are still available in the containers. To run them inside the respective containers:
docker exec -ti -u swift swift_object_server swift-ring-builder /etc/swift/object.builder
If you require these commands, install the following package as the stack user on the undercloud or the heat-admin user on the overcloud:
sudo yum install -y python-swift
sudo yum install -y python2-swiftclient
5.2. Troubleshooting containerized services
If a containerized service fails during or after overcloud deployment, use the following recommendations to determine the root cause for the failure:
Before running these commands, check that you are logged into an overcloud node and not running these commands on the undercloud.
Checking the container logs
Each container retains standard output from its main process. This output acts as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, use the following command:
$ sudo docker logs keystone
In most cases, this log provides the cause of a container’s failure.
Inspecting the container
In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data:
$ sudo docker inspect keystone
This provides a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command:
$ sudo docker inspect keystone | jq .[0].Mounts
You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option:
$ sudo docker inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone
The --format option uses Go syntax to create queries.
Use these options in conjunction with the docker run command to recreate the container for troubleshooting purposes:
$ OPTIONS=$( sudo docker inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone )
$ sudo docker run --rm $OPTIONS /bin/bash
Running commands in the container
In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following docker command to execute commands within a running container. For example, to run a command in the keystone container:
$ sudo docker exec -ti keystone <COMMAND>
The -ti options run the command through an interactive pseudoterminal.
Replace <COMMAND> with your desired command. For example, each container has a health check script to verify the service connection. You can run the health check script for keystone with the following command:
$ sudo docker exec -ti keystone /openstack/healthcheck
To access the container’s shell, run docker exec using /bin/bash as the command:
$ sudo docker exec -ti keystone /bin/bash
Exporting a container
When a container fails, you might need to investigate the full contents of its file system. In this case, you can export the full file system of a container as a tar archive. For example, to export the keystone container's file system, run the following command:
$ sudo docker export keystone -o keystone.tar
This command creates the keystone.tar archive, which you can extract and explore.
Chapter 6. Comparing Systemd services to containerized services
This chapter provides some reference material to show how containerized services differ from Systemd services.
6.1. Systemd service commands vs containerized service commands
The following table shows some similarities between Systemd-based commands and their Docker equivalents. This helps identify the type of service operation you aim to perform.
The openstack-nova-api service and the nova_api container are used as examples.

| Function | Systemd-based | Docker-based |
|---|---|---|
| List all services | systemctl list-units -t service | docker ps --all |
| List active services | systemctl list-units -t service --state active | docker ps |
| Check status of service | systemctl status openstack-nova-api | docker ps --filter "name=nova_api" |
| Stop service | systemctl stop openstack-nova-api | docker stop nova_api |
| Start service | systemctl start openstack-nova-api | docker start nova_api |
| Restart service | systemctl restart openstack-nova-api | docker restart nova_api |
| Show service configuration | systemctl show openstack-nova-api | docker inspect nova_api |
| Show service logs | journalctl -u openstack-nova-api | docker logs nova_api |
6.2. Systemd services vs containerized services
The following table shows Systemd-based OpenStack services and their container-based equivalents.
| OpenStack service | Systemd services | Docker containers |
|---|---|---|
| aodh | openstack-aodh-evaluator.service, openstack-aodh-listener.service, openstack-aodh-notifier.service | aodh_api, aodh_evaluator, aodh_listener, aodh_notifier |
| ceilometer | openstack-ceilometer-central.service, openstack-ceilometer-notification.service | ceilometer_agent_central, ceilometer_agent_notification |
| cinder | openstack-cinder-api.service, openstack-cinder-backup.service, openstack-cinder-scheduler.service, openstack-cinder-volume.service | cinder_api, cinder_backup, cinder_scheduler, cinder_volume |
| glance | openstack-glance-api.service, openstack-glance-registry.service | glance_api |
| gnocchi | openstack-gnocchi-metricd.service, openstack-gnocchi-statsd.service | gnocchi_api, gnocchi_metricd, gnocchi_statsd |
| heat | openstack-heat-api.service, openstack-heat-api-cfn.service, openstack-heat-engine.service | heat_api, heat_api_cfn, heat_api_cron, heat_engine |
| horizon | httpd.service | horizon |
| keystone | httpd.service (keystone runs as a WSGI application) | keystone, keystone_cron |
| neutron | neutron-server.service, neutron-dhcp-agent.service, neutron-l3-agent.service, neutron-metadata-agent.service, neutron-openvswitch-agent.service | neutron_api, neutron_dhcp, neutron_l3_agent, neutron_metadata_agent, neutron_ovs_agent |
| nova | openstack-nova-api.service, openstack-nova-conductor.service, openstack-nova-consoleauth.service, openstack-nova-novncproxy.service, openstack-nova-scheduler.service | nova_api, nova_conductor, nova_consoleauth, nova_vnc_proxy, nova_scheduler, nova_placement |
| panko | | panko_api |
| swift | openstack-swift-account.service, openstack-swift-container.service, openstack-swift-object.service, openstack-swift-proxy.service | swift_account_server, swift_container_server, swift_object_server, swift_proxy |
6.3. Systemd log locations vs containerized log locations
The following table shows Systemd-based OpenStack logs and their equivalents for containers. All container-based log locations are available on the physical host and are mounted to the container.
| OpenStack service | Systemd service logs | Docker container logs |
|---|---|---|
| aodh | /var/log/aodh/ | /var/log/containers/aodh/ |
| ceilometer | /var/log/ceilometer/ | /var/log/containers/ceilometer/ |
| cinder | /var/log/cinder/ | /var/log/containers/cinder/ |
| glance | /var/log/glance/ | /var/log/containers/glance/ |
| gnocchi | /var/log/gnocchi/ | /var/log/containers/gnocchi/ |
| heat | /var/log/heat/ | /var/log/containers/heat/ |
| horizon | /var/log/horizon/ | /var/log/containers/horizon/ |
| keystone | /var/log/keystone/ | /var/log/containers/keystone/ |
| databases | /var/log/mariadb/, /var/log/mongodb/, /var/log/mysqld.log | /var/log/containers/mysql/ |
| neutron | /var/log/neutron/ | /var/log/containers/neutron/ |
| nova | /var/log/nova/ | /var/log/containers/nova/ |
| panko | | /var/log/containers/panko/ |
| rabbitmq | /var/log/rabbitmq/ | /var/log/containers/rabbitmq/ |
| redis | /var/log/redis/ | /var/log/containers/redis/ |
| swift | /var/log/swift/ | /var/log/containers/swift/ |
6.4. Systemd configuration vs containerized configuration
The following table shows Systemd-based OpenStack configuration and their equivalents for containers. All container-based configuration locations are available on the physical host, are mounted to the container, and are merged (via kolla) into the configuration within each respective container.
| OpenStack service | Systemd service configuration | Docker container configuration |
|---|---|---|
| aodh | /etc/aodh/ | /var/lib/config-data/puppet-generated/aodh/ |
| ceilometer | /etc/ceilometer/ | /var/lib/config-data/puppet-generated/ceilometer/ |
| cinder | /etc/cinder/ | /var/lib/config-data/puppet-generated/cinder/ |
| glance | /etc/glance/ | /var/lib/config-data/puppet-generated/glance_api/ |
| gnocchi | /etc/gnocchi/ | /var/lib/config-data/puppet-generated/gnocchi/ |
| haproxy | /etc/haproxy/ | /var/lib/config-data/puppet-generated/haproxy/ |
| heat | /etc/heat/ | /var/lib/config-data/puppet-generated/heat/, /var/lib/config-data/puppet-generated/heat_api/, /var/lib/config-data/puppet-generated/heat_api_cfn/ |
| horizon | /etc/openstack-dashboard/ | /var/lib/config-data/puppet-generated/horizon/ |
| keystone | /etc/keystone/ | /var/lib/config-data/puppet-generated/keystone/ |
| databases | /etc/my.cnf.d/, /etc/my.cnf | /var/lib/config-data/puppet-generated/mysql/ |
| neutron | /etc/neutron/ | /var/lib/config-data/puppet-generated/neutron/ |
| nova | /etc/nova/ | /var/lib/config-data/puppet-generated/nova/, /var/lib/config-data/puppet-generated/nova_placement/ |
| panko | | /var/lib/config-data/puppet-generated/panko/ |
| rabbitmq | /etc/rabbitmq/ | /var/lib/config-data/puppet-generated/rabbitmq/ |
| redis | /etc/redis/ | /var/lib/config-data/puppet-generated/redis/ |
| swift | /etc/swift/ | /var/lib/config-data/puppet-generated/swift/, /var/lib/config-data/puppet-generated/swift_ringbuilder/ |