Chapter 9. Performing basic overcloud administration tasks
This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud.
9.1. Accessing overcloud nodes through SSH
You can access each overcloud node through the SSH protocol.
- Each overcloud node contains a tripleo-admin user, formerly known as the heat-admin user.
- The stack user on the undercloud has key-based SSH access to the tripleo-admin user on each overcloud node.
- All overcloud nodes have a short hostname that the undercloud resolves to an IP address on the control plane network. Each short hostname uses a .ctlplane suffix. For example, the short name for overcloud-controller-0 is overcloud-controller-0.ctlplane.
Prerequisites
- A deployed overcloud with a working control plane network.
Procedure
- Log in to the undercloud as the stack user.
- Find the name of the node that you want to access:
  (undercloud)$ metalsmith list
- Connect to the node as the tripleo-admin user:
  (undercloud)$ ssh tripleo-admin@overcloud-controller-0.ctlplane
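You can also run a single remote command over SSH without opening an interactive session. For example, a hypothetical check of the node uptime:
  (undercloud)$ ssh tripleo-admin@overcloud-controller-0.ctlplane uptime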
9.2. Managing containerized services
Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services.
Listing containers and images
To list running containers, run the following command:
$ sudo podman ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo podman ps --all
To list container images, run the following command:
$ sudo podman images
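You can also narrow the list to a single service with a name filter. For example, a minimal sketch that matches containers whose name contains keystone:
$ sudo podman ps --filter name=keystone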
Inspecting container properties
To view the properties of a container or container image, use the podman inspect command. For example, to inspect the keystone container, run the following command:
$ sudo podman inspect keystone
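Because podman inspect outputs a large JSON document, you can extract a single property with a Go template. For example, an illustrative query for the container state:
$ sudo podman inspect --format '{{.State.Status}}' keystone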
Managing containers with Systemd services
Previous versions of OpenStack Platform managed containers with Docker and its daemon. Now, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container.
Do not use the Podman CLI to stop, start, or restart containers, because Systemd applies a restart policy. Use Systemd service commands instead.
To check a container status, run the systemctl status command:
$ sudo systemctl status tripleo_keystone
● tripleo_keystone.service - keystone container
Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago
Main PID: 29012 (podman)
CGroup: /system.slice/tripleo_keystone.service
└─29012 /usr/bin/podman start -a keystone
To stop a container, run the systemctl stop command:
$ sudo systemctl stop tripleo_keystone
To start a container, run the systemctl start command:
$ sudo systemctl start tripleo_keystone
To restart a container, run the systemctl restart command:
$ sudo systemctl restart tripleo_keystone
Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations:
- Clean exit code or signal, such as running the podman stop command.
- Unclean exit code, such as the podman container crashing after a start.
- Unclean signals.
- Timeout if the container takes more than 1 minute 30 seconds to start.
For more information about Systemd services, see the systemd.service documentation.
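The container services follow the tripleo_<service> naming shown above, so you can list them all at once with a glob. A minimal sketch:
$ sudo systemctl list-units 'tripleo_*' --type=service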
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart.
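It follows that a configuration change survives restarts only if you make it in the puppet-generated copy on the node and then restart the service. An illustrative sequence for the keystone example above:
$ sudo vi /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
$ sudo systemctl restart tripleo_keystone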
Monitoring podman containers with podman healthcheck
You can use the podman healthcheck command to check the health of a RHOSP service container. For example:
$ sudo podman healthcheck run keystone
Health checks are not configured for every service container. If you run the podman healthcheck command on a container that does not have a health check defined, you receive an error indicating that the container has no defined health check.
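The command also communicates health through its exit status, which is useful in scripts: 0 indicates healthy, non-zero otherwise. A minimal sketch:
$ sudo podman healthcheck run keystone; echo $?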
Checking container logs
Red Hat OpenStack Platform 17.1 logs all standard output (stdout) and standard error (stderr) from each container, consolidated in a single file for each container in /var/log/containers/stdout.
The host also applies log rotation to this directory, which prevents huge files and disk space issues.
If a container is replaced, the new container outputs to the same log file because podman uses the container name instead of the container ID.
You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo podman logs keystone
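To limit the output to the most recent entries, or to follow the log live, podman logs accepts the --tail and --follow options. For example:
$ sudo podman logs --tail 20 keystone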
Accessing containers
To enter the shell for a containerized service, use the podman exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo podman exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo podman exec --user 0 -it <NAME OR ID> /bin/bash
To exit the container, run the following command:
# exit
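You can also run a single command inside a container without opening an interactive shell by passing the command directly to podman exec. An illustrative example (the file path is an assumption):
$ sudo podman exec keystone cat /etc/keystone/keystone.conf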
9.3. Modifying the overcloud environment
You can modify the overcloud to add additional features or alter existing operations.
Procedure
- To modify the overcloud, make modifications to your custom environment files and heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 7.3, "Configuring and deploying the overcloud", rerun the following command:
  $ source ~/stackrc
  (undercloud) $ openstack overcloud deploy --templates \
    -e ~/templates/overcloud-baremetal-deployed.yaml \
    -e ~/templates/network-environment.yaml \
    -e ~/templates/storage-environment.yaml \
    --ntp-server pool.ntp.org
  Director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. Director does not recreate the overcloud, but rather changes the existing overcloud.
  Important
  Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually.
- If you want to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example:
  $ source ~/stackrc
  (undercloud) $ openstack overcloud deploy --templates \
    -e ~/templates/new-environment.yaml \
    -e ~/templates/network-environment.yaml \
    -e ~/templates/storage-environment.yaml \
    -e ~/templates/overcloud-baremetal-deployed.yaml \
    --ntp-server pool.ntp.org
  This command includes the new parameters and resources from the environment file into the stack.
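An environment file sets overrides under a parameter_defaults section. For example, a minimal sketch of a hypothetical new-environment.yaml; the parameter and value here are illustrative:
  parameter_defaults:
    # Set the time zone on all overcloud nodes.
    TimeZone: 'UTC'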
Important
It is not advisable to make manual modifications to the overcloud configuration because director might overwrite these modifications later.
9.4. Importing virtual machines into the overcloud
You can migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform (RHOSP) environment.
Procedure
- On the existing OpenStack environment, create a new image by taking a snapshot of a running server, and download the image:
  $ openstack server image create --name <image_name> <instance_name>
  $ openstack image save --file <exported_vm.qcow2> <image_name>
  - Replace <instance_name> with the name of the instance.
  - Replace <image_name> with the name of the new image.
  - Replace <exported_vm.qcow2> with the name of the exported virtual machine.
- Copy the exported image to the undercloud node:
  $ scp exported_vm.qcow2 stack@192.168.0.2:~/.
- Log in to the undercloud as the stack user.
- Source the overcloudrc credentials file:
  $ source ~/overcloudrc
- Upload the exported image into the overcloud:
  (overcloud) $ openstack image create --disk-format qcow2 --file <exported_vm.qcow2> --container-format bare <image_name>
- Launch a new instance:
  (overcloud) $ openstack server create --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id <instance_name>
You can use these commands to copy each virtual machine disk from the existing OpenStack environment to the new Red Hat OpenStack Platform environment. Note that QCOW snapshots lose their original layering system.
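Before launching the instance, you might want to confirm that the image uploaded successfully. A suggested verification step, not part of the original procedure:
  (overcloud) $ openstack image show <image_name>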
9.5. Launching the ephemeral heat process
In previous versions of Red Hat OpenStack Platform (RHOSP), a system-installed Heat process was used to install the overcloud. Now, ephemeral Heat installs the overcloud, meaning that the heat-api and heat-engine processes are started on demand by the deployment, update, and upgrade commands.
Previously, you used the openstack stack command to create and manage stacks. This command is no longer available by default. For troubleshooting and debugging purposes, for example if the stack fails, you must first launch the ephemeral Heat process to use the openstack stack commands.
Use the openstack tripleo launch heat command to enable ephemeral heat outside of a deployment.
Procedure
- Launch the ephemeral Heat process:
  (undercloud)$ openstack tripleo launch heat --heat-dir /home/stack/overcloud-deploy/<overcloud>/heat-launcher --restore-db
  Replace <overcloud> with the name of your overcloud stack.
  Note
  The command exits after launching the Heat process, and the Heat process continues to run in the background as a Podman pod.
- Verify that the ephemeral-heat process is running:
  (undercloud)$ sudo podman pod ps
  POD ID        NAME            STATUS   CREATED        INFRA ID      # OF CONTAINERS
  958b141609b2  ephemeral-heat  Running  2 minutes ago  44447995dbcf  3
- Export the OS_CLOUD environment variable:
  (undercloud)$ export OS_CLOUD=heat
- List the installed stacks:
  (undercloud)$ openstack stack list
  +--------------------------------------+------------+---------+-----------------+----------------------+--------------+
  | ID                                   | Stack Name | Project | Stack Status    | Creation Time        | Updated Time |
  +--------------------------------------+------------+---------+-----------------+----------------------+--------------+
  | 761e2a54-c6f9-4e0f-abe6-c8e0ad51a76c | overcloud  | admin   | CREATE_COMPLETE | 2022-08-29T20:48:37Z | None         |
  +--------------------------------------+------------+---------+-----------------+----------------------+--------------+
You can debug with commands such as openstack stack environment show and openstack stack resource list.
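For example, an illustrative call that lists the resources of the overcloud stack shown in the previous output:
  (undercloud)$ openstack stack resource list overcloud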
- After you have finished debugging, stop the ephemeral Heat process:
  (undercloud)$ openstack tripleo launch heat --kill
Sometimes, exporting the heat environment fails. This can happen when other credentials, such as overcloudrc, are in use. In this case, unset the existing environment and set the heat environment:
(overcloud)$ unset OS_CLOUD
(overcloud)$ unset OS_PROJECT_NAME
(overcloud)$ unset OS_PROJECT_DOMAIN_NAME
(overcloud)$ unset OS_USER_DOMAIN_NAME
(overcloud)$ OS_AUTH_TYPE=none
(overcloud)$ OS_ENDPOINT=http://127.0.0.1:8006/v1/admin
(overcloud)$ export OS_CLOUD=heat
9.6. Running the dynamic inventory script
You can run Ansible-based automation in your Red Hat OpenStack Platform (RHOSP) environment. Use the tripleo-ansible-inventory.yaml inventory file located in the /home/stack/overcloud-deploy/<stack> directory to run Ansible playbooks or ad hoc commands.
If you want to run an Ansible playbook or an Ansible ad hoc command on the undercloud, you must use the /home/stack/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml inventory file.
Procedure
- To view your inventory of nodes, run the following Ansible ad hoc command:
  (undercloud) $ ansible -i ./overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml all --list
  Note
  Replace <stack> with the name of your deployed overcloud stack.
- To execute Ansible playbooks on your environment, run the ansible command and include the full path to the inventory file using the -i option. For example:
  (undercloud) $ ansible <hosts> -i ./overcloud-deploy/tripleo-ansible-inventory.yaml <playbook> <options>
  Replace <hosts> with the type of hosts that you want to use:
  - controller for all Controller nodes
  - compute for all Compute nodes
  - overcloud for all overcloud child nodes, for example, controller and compute nodes
  - "*" for all nodes
  Replace <options> with additional Ansible options:
  - Use the --ssh-extra-args='-o StrictHostKeyChecking=no' option to bypass confirmation on host key checking.
  - Use the -u [USER] option to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter.
  - Use the -m [MODULE] option to use a specific Ansible module. The default is command, which executes Linux commands.
  - Use the -a [MODULE_ARGS] option to define arguments for the chosen module.
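For example, a hypothetical ad hoc run that checks the uptime of all Controller nodes, assuming a stack named overcloud:
  (undercloud) $ ansible controller -i ./overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml -m command -a "uptime"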
Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes.
9.7. Removing an overcloud stack
You can delete an overcloud stack and unprovision all the stack nodes.
Deleting your overcloud stack does not erase all the overcloud data. If you need to erase all the overcloud data, contact Red Hat support.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
  $ source ~/stackrc
- Retrieve a list of all the nodes in your stack and their current status:
  (undercloud)$ openstack baremetal node list
  +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+
  | UUID                                 | Name         | Instance UUID                        | Power State | Provisioning State | Maintenance |
  +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+
  | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0    | 059fb1a1-53ea-4060-9a47-09813de28ea1 | power on    | active             | False       |
  | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1    | e73a4b50-9579-4fe1-bd1a-556a2c8b504f | power on    | active             | False       |
  | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | 6d69e48d-10b4-45dd-9776-155a9b8ad575 | power on    | active             | False       |
  | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | 1f836ac0-a70d-4025-88a3-bbe0583b4b8e | power on    | active             | False       |
  | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | e2edd028-cea6-4a98-955e-5c392d91ed46 | power on    | active             | False       |
  +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+
- Delete the overcloud stack and unprovision the nodes and networks:
  (undercloud)$ openstack overcloud delete -b <node_definition_file> \
  --networks-file <networks_definition_file> --network-ports <stack>
  - Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml.
  - Replace <networks_definition_file> with the name of your networks definition file, for example, network_data_v2.yaml.
  - Replace <stack> with the name of the stack that you want to delete. If not specified, the default stack is overcloud.
- Confirm that you want to delete the overcloud:
  Are you sure you want to delete this overcloud [y/N]?
- Wait for the overcloud to delete and the nodes and networks to unprovision.
- Confirm that the bare-metal nodes have been unprovisioned:
  (undercloud) [stack@undercloud-0 ~]$ openstack baremetal node list
  +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
  | UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
  +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
  | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0    | None          | power off   | available          | False       |
  | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1    | None          | power off   | available          | False       |
  | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | None          | power off   | available          | False       |
  | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | None          | power off   | available          | False       |
  | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | None          | power off   | available          | False       |
  +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
- Remove the stack directories:
  $ rm -rf ~/overcloud-deploy/<stack>
  $ rm -rf ~/config-download/<stack>
  Note
  The directory paths for your stack might be different from the default if you used the --output-dir and --working-dir options when deploying the overcloud with the openstack overcloud deploy command.
9.8. Managing local disk partition sizes
If your local disk partitions continue to fill up after you have optimized the configuration of your partition sizes, then perform one of the following tasks:
- Manually delete files from the affected partitions.
- Add a new physical disk and add it to the LVM volume group. For more information, see Configuring and managing logical volumes.
- Overprovision the partition to use the remaining spare disk space. This option is possible because the default whole-disk overcloud image, overcloud-hardened-uefi-full.qcow2, is backed by a thin pool. For more information about thin-provisioned logical volumes, see Creating and managing thin provisioned volumes (thin volumes) in the RHEL Configuring and managing local volumes guide.
  Warning
  Only use overprovisioning when it is not possible to manually delete files or add a new physical disk. Overprovisioning can fail during the write operation if there is insufficient free physical space.
Adding a new disk and overprovisioning the partition require a support exception. Contact the Red Hat Customer Experience and Engagement team to discuss a support exception, if applicable, or other options.
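If overprovisioning is approved, the resize itself is a standard LVM operation. A minimal sketch, assuming the affected logical volume is vg/lv_srv; verify the actual volume group, logical volume, and thin pool names on the node first:
  $ sudo lvs
  $ sudo lvextend -L +10G --resizefs /dev/vg/lv_srv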