Chapter 10. Performing basic overcloud administration tasks
This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud.
10.1. Managing containerized services
OpenStack Platform runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services.
Listing containers and images
To list running containers, run the following command:
$ sudo podman ps
To include stopped or failed containers in the command output, add the --all option to the command:
$ sudo podman ps --all
To list container images, run the following command:
$ sudo podman images
Inspecting container properties
To view the properties of a container or container image, use the podman inspect command. For example, to inspect the keystone container, run the following command:
$ sudo podman inspect keystone
Managing containers with Systemd services
Previous versions of OpenStack Platform managed containers with Docker and its daemon. In OpenStack Platform 15, the Systemd services interface manages the lifecycle of the containers. Each container is a service, and you use Systemd commands to perform specific operations on each container.
Using the Podman CLI to stop, start, or restart containers is not recommended, because Systemd applies a restart policy. Use the Systemd service commands instead.
To check a container status, run the systemctl status command:
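$ sudo systemctl status tripleo_keystone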
To stop a container, run the systemctl stop command:
$ sudo systemctl stop tripleo_keystone
To start a container, run the systemctl start command:
$ sudo systemctl start tripleo_keystone
To restart a container, run the systemctl restart command:
$ sudo systemctl restart tripleo_keystone
Because no daemon monitors container status, Systemd automatically restarts most containers in these situations:
- Clean exit code or signal, such as running the podman stop command.
- Unclean exit code, such as the container crashing after a start.
- Unclean signals.
- Timeout if the container takes more than 1 minute 30 seconds to start.
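You can inspect the restart policy and start timeout that Systemd applies to a given container service by querying the unit properties. For example, for the keystone container service:

$ sudo systemctl show tripleo_keystone -p Restart,TimeoutStartUSec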
For more information about Systemd services, see the systemd.service documentation.
Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the node's local file system in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the node's local file system, which overwrites any changes made within the container before the restart.
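For example, a minimal sketch of a persistent configuration change for the keystone service, assuming the puppet-generated path described above: edit the copy on the node's local file system, then restart the container so it regenerates its configuration from that file:

$ sudo vi /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
$ sudo systemctl restart tripleo_keystone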
Monitoring podman containers with Systemd timers
The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts.
To list all OpenStack Platform containers timers, run the systemctl list-timers command and limit the output to lines containing tripleo:
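$ sudo systemctl list-timers | grep tripleo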
To check the status of a specific container timer, run the systemctl status command for the healthcheck service:
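$ sudo systemctl status tripleo_keystone_healthcheck.service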
To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command:
$ sudo systemctl status tripleo_keystone_healthcheck.timer
● tripleo_keystone_healthcheck.timer - keystone container healthcheck
Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled)
Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago
If the healthcheck service is disabled but the timer for that service is present and enabled, this means that the check has timed out for now but will run again according to the timer. You can also start the check manually at any time.
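For example, to run the keystone health check manually:

$ sudo systemctl start tripleo_keystone_healthcheck.service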
The podman ps command does not show the container health status.
Checking container logs
OpenStack Platform 15 introduces a new logging directory, /var/log/containers/stdout, which contains the standard output (stdout) and standard error (stderr) of all containers, consolidated into a single file per container.
Paunch and the container-puppet.py script configure podman containers to push their output to the /var/log/containers/stdout directory, which creates a collection of all logs, even for deleted containers such as the container-puppet-* containers.
The host also applies log rotation to this directory, which prevents huge files and disk space issues.
If a container is replaced, the new container outputs to the same log file, because podman uses the container name rather than the container ID.
You can also check the logs for a containerized service using the podman logs command. For example, to view the logs for the keystone container, run the following command:
$ sudo podman logs keystone
Accessing containers
To enter the shell for a containerized service, use the podman exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:
$ sudo podman exec -it keystone /bin/bash
To enter the shell for the keystone container as the root user, run the following command:
$ sudo podman exec --user 0 -it <NAME OR ID> /bin/bash
To exit from the container, run the following command:
# exit
10.2. Modifying the overcloud environment
Sometimes you might want to modify the overcloud to add additional features, or change the way it operates. To modify the overcloud, make modifications to your custom environment files and Heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 6.11, “Deployment command”, rerun the following command:
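For example, a rerun might look similar to the following sketch; the environment file names here are illustrative, and you must pass the same set of files and options that you used for the initial deployment:

$ openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e ~/templates/overcloud_images.yaml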
The director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. The director does not recreate the overcloud, but rather changes the existing overcloud.
Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually.
If you want to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example:
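A minimal sketch, reusing the illustrative file names from the earlier example and appending a hypothetical new-environment.yaml; environment files that you list later with -e take precedence over earlier ones:

$ openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e ~/templates/overcloud_images.yaml \
  -e ~/templates/new-environment.yaml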
This command includes the new parameters and resources from the environment file into the stack.
It is not advisable to make manual modifications to the overcloud configuration as the director might overwrite these modifications later.
10.3. Importing virtual machines into the overcloud
This procedure contains the steps to migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform environment.
Procedure
On the existing OpenStack environment, create a new image by taking a snapshot of a running server, and download the image:

$ openstack server image create instance_name --name image_name
$ openstack image save image_name --file exported_vm.qcow2

Copy the exported image to the undercloud node:

$ scp exported_vm.qcow2 stack@192.168.0.2:~/.

Log in to the undercloud as the stack user and source the overcloudrc file:

$ source ~/overcloudrc

Upload the exported image into the overcloud:

(overcloud) $ openstack image create imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare

Launch a new instance:

(overcloud) $ openstack server create imported_instance --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id
These commands copy each VM disk from the existing OpenStack environment into the new Red Hat OpenStack Platform environment. Snapshots that use QCOW lose their original layering system.
10.4. Running the dynamic inventory script
The director can run Ansible-based automation on your OpenStack Platform environment. The director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment.
Procedure
To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing stackrc:

$ source ~/stackrc
(undercloud) $ tripleo-ansible-inventory --list

The --list option returns details about all hosts. This command outputs the dynamic inventory in JSON format:

{"overcloud": {"children": ["controller", "compute"], "vars": {"ansible_ssh_user": "heat-admin"}}, "controller": ["192.168.24.2"], "undercloud": {"hosts": ["localhost"], "vars": {"overcloud_horizon_url": "http://192.168.24.4:80/dashboard", "overcloud_admin_password": "abcdefghijklm12345678", "ansible_connection": "local"}}, "compute": ["192.168.24.3"]}

To execute Ansible playbooks on your environment, run the ansible command and include the full path of the dynamic inventory tool using the -i option. For example:

(undercloud) $ ansible [HOSTS] -i /bin/tripleo-ansible-inventory [OTHER OPTIONS]

Replace [HOSTS] with the type of hosts to use:
- controller for all Controller nodes
- compute for all Compute nodes
- overcloud for all overcloud child nodes, that is, controller and compute
- undercloud for the undercloud
- "*" for all nodes

Replace [OTHER OPTIONS] with additional Ansible options. Some useful options include:
- --ssh-extra-args='-o StrictHostKeyChecking=no' to bypass confirmation on host key checking.
- -u [USER] to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter.
- -m [MODULE] to use a specific Ansible module. The default is command, which executes Linux commands.
- -a [MODULE_ARGS] to define arguments for the chosen module.
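As a combined illustration, a hypothetical ad-hoc run that executes a Linux command on all Controller nodes might look like this; the df -h command is only an example:

(undercloud) $ ansible controller -i /bin/tripleo-ansible-inventory -m command -a "df -h"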
Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes.
10.5. Removing the overcloud
Delete any existing overcloud:
$ source ~/stackrc
(undercloud) $ openstack overcloud delete overcloud
Confirm the deletion of the overcloud:
(undercloud) $ openstack stack list
Deletion takes a few minutes.
Once the removal completes, follow the standard steps in the deployment scenarios to recreate your overcloud.