Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator
Using director Operator to deploy and manage a Red Hat OpenStack Platform overcloud in a Red Hat OpenShift Container Platform cluster
Abstract
Support for Red Hat OpenStack Platform director Operator will only be granted if your architecture is approved by Red Hat Services or by a Technical Account Manager. Please contact Red Hat before deploying this feature.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Creating and deploying a RHOSP overcloud with director Operator
Red Hat OpenShift Container Platform (RHOCP) uses a modular system of Operators to extend the functions of your RHOCP cluster. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) adds the ability to install and run a RHOSP cloud within RHOCP. OSPdO manages a set of Custom Resource Definitions (CRDs) that deploy and manage the infrastructure and configuration of RHOSP nodes. The basic architecture of an OSPdO-deployed RHOSP cloud includes the following features:
- Virtualized control plane
- The Controller nodes are virtual machines (VMs) that OSPdO creates in Red Hat OpenShift Virtualization.
- Bare-metal machine provisioning
- OSPdO uses RHOCP bare-metal machine management to provision the Compute nodes for the RHOSP cloud.
- Networking
- OSPdO configures the underlying networks for RHOSP services.
- Heat and Ansible-based configuration
- OSPdO stores custom heat configuration in RHOCP and uses the config-download functionality in director to convert the configuration into Ansible playbooks. If you change the stored heat configuration, OSPdO automatically regenerates the Ansible playbooks.
- CLI client
- OSPdO creates an openstackclient pod for users to run RHOSP CLI commands and interact with their RHOSP cloud.
You can use the resources specific to OSPdO to provision your overcloud infrastructure, generate your overcloud configuration, and create an overcloud. To create a RHOSP overcloud with OSPdO, you must complete the following tasks:
- Install OSPdO on an operational RHOCP cluster.
- Create a RHOCP cluster data volume for the base operating system and add authentication details for your remote Git repository.
- Create the overcloud networks using the OpenStackNetConfig CRD, including the control plane and any isolated networks.
- Create ConfigMaps to store any custom heat templates and environment files for your overcloud.
- Create a control plane, which includes three virtual machines for Controller nodes and a pod to perform client operations.
- Create bare-metal Compute nodes.
- Create an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
- Apply the Ansible playbook configuration to your overcloud nodes by using openstackdeploy.
1.1. Custom resource definitions for director Operator
The Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) includes a set of custom resource definitions (CRDs) that you can use to manage overcloud resources.
Use the following command to view a complete list of the OSPdO CRDs:

$ oc get crd | grep "^openstack"

Use the following command to view the definition for a specific CRD:

$ oc describe crd <crd_name>

Use the following command to view descriptions of the fields you can use to configure a specific CRD:

$ oc explain <crd_name>.spec
OSPdO includes two types of CRD: hardware provisioning and software configuration.
Hardware Provisioning CRDs
- openstacknetattachment (internal) - Used by OSPdO to manage the NodeNetworkConfigurationPolicy and NodeSriovConfigurationPolicy CRDs, which are used to attach networks to virtual machines (VMs).
- openstacknetconfig - Use to specify openstacknetattachment and openstacknet CRDs that describe the full network configuration. The set of reserved IP and MAC addresses for each node is reflected in the status.
- openstackbaremetalset - Use to create sets of bare-metal hosts for specific RHOSP roles, such as "Compute" and "Storage".
- openstackcontrolplane - Use to create the RHOSP control plane and manage associated openstackvmset CRs.
- openstacknet (internal) - Use to create networks that are used to assign IPs to the openstackvmset and openstackbaremetalset CRs.
- openstackipset (internal) - Contains a set of IPs for a given network and role. Used by OSPdO to manage IP addresses.
- openstackprovisionserver - Use to serve custom images for provisioning bare-metal nodes with Metal3.
- openstackvmset - Use to create sets of OpenShift Virtualization VMs for a specific RHOSP role, such as "Controller", "Database", or "NetworkController".
Software Configuration CRDs
- openstackconfiggenerator - Use to automatically generate Ansible playbooks for deployment when you scale up or make changes to custom ConfigMaps for deployment.
- openstackconfigversion - Use to represent a set of executable Ansible playbooks.
- openstackdeploy - Use to execute the set of Ansible playbooks defined in the openstackconfigversion CR.
- openstackclient - Creates a pod used to run RHOSP deployment commands.
1.2. CRD naming conventions
Each custom resource definition (CRD) can have multiple names defined with the spec.names parameter. Which name you use depends on the context of the action you perform:
- Use kind when you create and interact with resource manifests:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackBaremetalSet
  ...

  The kind name in the resource manifest correlates to the kind name in the respective CRD.

- Use plural when you interact with multiple resources:

  $ oc get openstackbaremetalsets

- Use singular when you interact with a single resource:

  $ oc describe openstackbaremetalset/compute

- Use shortName for any CLI interactions:

  $ oc get osbmset
1.3. Features not supported by director Operator
- Fibre Channel back end
- Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat OpenShift Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block Storage drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-volume and use the role to create physical nodes instead of including it on the virtualized controllers. For more information, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide.
- Role-based Ansible playbooks
- Director Operator (OSPdO) does not support running Ansible playbooks to configure role-based node attributes after the bare-metal nodes are provisioned. This means that you cannot use the role_growvols_args extra Ansible variable to configure whole disk partitions for the Object Storage service (swift). Role-based Ansible playbook configuration only applies to bare-metal nodes that are provisioned by using a node definition file.
- Migration of workloads from Red Hat Virtualization to OSPdO
- You cannot migrate workloads from a Red Hat Virtualization environment to an OSPdO environment.
- Using a VLAN for the control plane network
- TripleO does not support using a VLAN for the control plane (ctlplane) network.
- Multiple Compute cells
- You cannot add additional Compute cells to an OSPdO environment.
- BGP for the control plane
- BGP is not supported for the control plane in an OSPdO environment.
- PCI passthrough and attaching hardware devices to Controller VMs
- You cannot attach SRIOV devices and FC SAN Storage to Controller VMs.
1.4. Limitations with a director Operator deployment
A director Operator (OSPdO) environment has the following support limitations:
- Single-stack IPv6 is not supported. Only IPv4 is supported on the ctlplane network.
- You cannot create VLAN provider networks without dedicated networker nodes, because the NMState Operator cannot attach a VLAN trunk to the OSPdO Controller VMs. Therefore, to create VLAN provider networks, you must create dedicated Networker nodes on bare metal. For more information, see https://github.com/openstack/tripleo-heat-templates/blob/stable/wallaby/roles/Networker.yaml.
- You cannot remove the provisioning network.
- You cannot use a proxy for SSH connections to communicate with the Git repository.
- You cannot use HTTP or HTTPS to connect to the Git repository.
- You cannot customize the hostname of overcloud nodes.
1.5. Recommendations for a director Operator deployment
- Storage class
- For back end performance, use low latency SSD/NVMe-backed storage to create the RWX/RWO storage class required by the Controller virtual machines (VMs), the client pod, and images.
1.6. Additional resources
Chapter 2. Installing and preparing director Operator
You install Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) on an existing operational Red Hat OpenShift Container Platform (RHOCP) cluster. You perform the OSPdO installation tasks and all overcloud creation tasks on a workstation that has access to the RHOCP cluster. After you have installed OSPdO, you must create a data volume for the base operating system and add authentication details for your remote Git repository. You can also set the root password for your nodes. If you do not set a root password, you can still log in to nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.
Support for Red Hat OpenStack Platform director Operator will only be granted if your architecture is approved by Red Hat Services or by a Technical Account Manager. Please contact Red Hat before deploying this feature.
2.1. Prerequisites
- An operational Red Hat OpenShift Container Platform (RHOCP) cluster, version 4.12, 4.14, or 4.16. The cluster must contain a provisioning network, and the following Operators:
  - A baremetal cluster Operator. The baremetal cluster Operator must be enabled. For more information on baremetal cluster Operators, see Bare-metal cluster Operators.
  - OpenShift Virtualization Operator. For more information on installing the OpenShift Virtualization Operator, see Installing OpenShift Virtualization using the web console.
  - SR-IOV Network Operator.
  - Kubernetes NMState Operator. You must also create an NMState instance to finish installing all the NMState CRDs. A minimal example of this instance is shown after this list. For more information on installing the Kubernetes NMState Operator, see Installing the Kubernetes NMState Operator.
- The oc command line tool is installed on your workstation.
- A remote Git repository for OSPdO to store the generated configuration for your overcloud.
- An SSH key pair is generated for the Git repository and the public key is uploaded to the Git repository.
- The following persistent volumes to fulfill the persistent volume claims that OSPdO creates:
  - 4G for openstackclient-cloud-admin.
  - 1G for openstackclient-hosts.
  - 500G for the base image that OSPdO clones for each Controller virtual machine.
  - A minimum of 50G for each Controller virtual machine. For more information, see Controller node requirements.
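The NMState instance manifest referenced in the prerequisites is not preserved in this extract. The following is a minimal sketch of such an instance; the resource name nmstate is the conventional default and is an assumption here.

apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate

Apply it with, for example, oc apply -f nmstate.yaml, and wait for the NMState CRDs to become available before you continue.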
2.2. Bare-metal cluster Operators
Red Hat OpenShift Container Platform (RHOCP) clusters that you install with the installer-provisioned infrastructure (IPI) or assisted installation (AI) use the baremetal platform type and have the baremetal cluster Operator enabled. RHOCP clusters that you install with user-provisioned infrastructure (UPI) use the none platform type and might have the baremetal cluster Operator disabled.
If the cluster is of type AI or IPI, it uses metal3, a Kubernetes API for the management of bare-metal hosts. It maintains an inventory of available hosts as instances of the BareMetalHost custom resource definition (CRD). You can use the bare-metal Operator to perform the following tasks:
- Inspect the host's hardware details and report them to the corresponding BareMetalHost CR. This includes information about CPUs, RAM, disks, and NICs.
- Provision hosts with a specific image.
- Clean a host’s disk contents before or after provisioning.
To check if the baremetal cluster Operator is enabled, navigate to Administration > Cluster Settings > ClusterOperators > baremetal, scroll to the Conditions section, and view the Disabled status.
To check the platform type of the RHOCP cluster, navigate to Administration > Cluster Settings > Configuration > Infrastructure, switch to YAML view, scroll to the Conditions section, and view the status.platformStatus value.
2.3. Installing director Operator
To install director Operator (OSPdO), you must create the openstack project (namespace) for OSPdO and create the following custom resources (CRs) within the project:
- A CatalogSource, which identifies the index image to use for the OSPdO catalog.
- An OperatorGroup, which defines the Operator group for OSPdO and restricts OSPdO to a target namespace.
- A Subscription, which tracks changes in the OSPdO catalog.
Procedure
- Create the OSPdO project:

  $ oc new-project openstack

- Obtain the latest osp-director-operator-bundle image from https://catalog.redhat.com/software/containers/search.
- Download the Operator Package Manager (opm) tool from https://console.redhat.com/openshift/downloads.
- Use the opm tool to create an index image:

  $ BUNDLE_IMG="registry.redhat.io/rhosp-rhel9/osp-director-operator-bundle:1.3.1"
  $ INDEX_IMG="quay.io/<account>/osp-director-operator-index:x.y.z-a"
  $ opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman

- Push the index image to your registry:

  $ podman push ${INDEX_IMG}

- Create an environment file to configure the CatalogSource, OperatorGroup, and Subscription CRs required to install OSPdO, for example, osp-director-operator.yaml.
- To configure the CatalogSource CR, add the CatalogSource configuration to osp-director-operator.yaml. For information about how to apply the Quay authentication so that the Operator deployment can pull the image, see Accessing images for Operators from private registries.
- To configure the OperatorGroup CR, add the OperatorGroup configuration to osp-director-operator.yaml.
- To configure the Subscription CR, add the Subscription configuration to osp-director-operator.yaml. A combined sketch of these three CRs is shown after this procedure.
- Create the new CatalogSource, OperatorGroup, and Subscription CRs within the openstack namespace:

  $ oc apply -f osp-director-operator.yaml

- Confirm that you have installed OSPdO, osp-director-operator.openstack, by listing the installed Operators:

  $ oc get operators
  NAME                              AGE
  osp-director-operator.openstack   5m
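The original CatalogSource, OperatorGroup, and Subscription listings are not preserved in this extract. The following is a hedged sketch of what osp-director-operator.yaml might contain; the resource names and the index image reference are illustrative placeholders to adapt to your environment.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: quay.io/<account>/osp-director-operator-index:x.y.z-a
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: osp-director-operator-group
  namespace: openstack
spec:
  targetNamespaces:
  - openstack
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  name: osp-director-operator
  source: osp-director-operator-index
  sourceNamespace: openstack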
2.4. Creating a data volume for the base operating system
You must create a data volume with the Red Hat OpenShift Container Platform (RHOCP) cluster to store the base operating system image for your Controller virtual machines (VMs). You use the baseImageVolumeName parameter to specify this data volume when you create the OpenStackControlPlane and OpenStackVmSet custom resources.
Prerequisites
- The virtctl client tool is installed on your workstation. To install this tool on a Red Hat Enterprise Linux (RHEL) workstation, use the following commands:

  $ sudo subscription-manager repos --enable=cnv-4.12-for-rhel-8-x86_64-rpms
  $ sudo dnf install -y kubevirt-virtctl

- The virt-customize client tool is installed on your workstation. To install this tool on a RHEL workstation, use the following command:

  $ dnf install -y libguestfs-tools-c
Procedure
- Download a RHEL 9.2 QCOW2 image from the Product Download section of the Red Hat Customer Portal to your workstation.
- Optional: Add a custom CA certificate:

  $ sudo -s
  $ export LIBGUESTFS_BACKEND=direct
  $ virt-copy-in -a <local_path_to_image> <ca_certificate>.pem /etc/pki/ca-trust/source/anchors/

  You might want to add a custom CA certificate to secure LDAP communication for the Identity service, or to communicate with any non-RHOSP system.

- Create a script to customize the image to assign predictable network interface names. A sketch of such a script is shown below.
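The original customization script is not preserved in this extract. The following is a sketch based on the stated goal of predictable interface names: it disables cloud-init network configuration and removes net.ifnames=0 from the kernel command line so that interfaces keep predictable names. The file paths and commands are assumptions; adapt them to your image.

  cat <<'EOF' > customize_image.sh
  #!/bin/bash
  set -eux

  # Prevent cloud-init from rewriting the network configuration on first boot.
  mkdir -p /etc/cloud/cloud.cfg.d
  echo -e 'network:\n  config: disabled' > /etc/cloud/cloud.cfg.d/99-custom-networking.cfg

  # Remove net.ifnames=0 from the kernel command line so predictable NIC names are used.
  sed -i -e 's/net\.ifnames=0 *//g' /etc/default/grub
  grub2-mkconfig -o /boot/grub2/grub.cfg
  EOF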
- Run the image customization script:

  $ sudo -s
  $ export LIBGUESTFS_BACKEND=direct
  $ chmod 755 customize_image.sh
  $ virt-customize -a <local_path_to_image> --run customize_image.sh --truncate /etc/machine-id

- Use virtctl to upload the image to OpenShift Virtualization:

  $ virtctl image-upload dv <datavolume_name> -n openstack \
    --size=<size> --image-path=<local_path_to_image> \
    --storage-class <storage_class> --access-mode <access_mode> --insecure

  - Replace <datavolume_name> with the name of the data volume, for example, openstack-base-img.
  - Replace <size> with the size of the data volume required for your environment, for example, 500Gi. The minimum size is 500GB.
  - Replace <storage_class> with the required storage class from your cluster. Use the following command to retrieve the available storage classes:

    $ oc get storageclass

  - Replace <access_mode> with the access mode for the PVC. The default value is ReadWriteOnce.
2.5. Adding authentication details for your remote Git repository
Director Operator (OSPdO) stores rendered Ansible playbooks to a remote Git repository and uses this repository to track changes to the overcloud configuration. You can use any Git repository that supports SSH authentication. You must provide details for the Git repository as a Red Hat OpenShift Container Platform (RHOCP) Secret resource named git-secret.
Prerequisites
- The private key of the SSH key pair for your OSPdO Git repository.
Procedure
- Create the git-secret Secret resource:

  $ oc create secret generic <secret_name> -n <namespace> \
    --from-file=git_ssh_identity=<path_to_private_SSH_key> \
    --from-literal=git_url=<git_server_URL>

  - Replace <secret_name> with the name of the secret, in this case, git-secret.
  - Replace <namespace> with the name of the namespace to create the secret in, for example, openstack.
  - Replace <path_to_private_SSH_key> with the path to the private key to access the Git repository.
  - Replace <git_server_URL> with the SSH URL of the git repository that stores the OSPdO configuration, for example, ssh://<user>@<server>:2202/repo.git.

- Verify that the Secret resource is created:

  $ oc get secret/git-secret -n openstack
Next steps
2.6. Setting the root password for nodes
To access the root user with a password on each node, you can set a root password in a Secret resource named userpassword. Setting the root password for nodes is optional. If you do not set a root password, you can still log into nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.
If you set the root password, you must use the passwordSecret parameter to specify the name of this Secret resource when you create OpenStackControlPlane and OpenStackBaremetalSet custom resources. The examples in this guide use the Secret resource name userpassword.
Procedure
- Convert your chosen password to a base64 value:

  $ echo -n "p@ssw0rd!" | base64
  cEBzc3cwcmQh

  Important: The -n option removes the trailing newline from the echo output.

- Create a file named openstack-userpassword.yaml on your workstation and include the resource specification for the Secret in the file. A minimal sketch of this Secret is shown after this procedure.

  - Replace <secret_name> with the name of this Secret resource, for example, userpassword.
  - Replace <password> with your base64 encoded password.

- Create the userpassword Secret:

  $ oc create -f openstack-userpassword.yaml -n openstack
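The original Secret listing is not preserved in this extract. The following is a minimal sketch of openstack-userpassword.yaml; the key name NodeRootPassword is our assumption about the key that OSPdO expects, so verify it against the OpenStackControlPlane CRD schema before you use it.

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
data:
  NodeRootPassword: "<password>"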
Next steps
Chapter 3. Creating networks with director Operator
To create networks and bridges on OpenShift Virtualization worker nodes and connect your virtual machines (VMs) to these networks, you define your OpenStackNetConfig custom resource (CR) and specify all the subnets for the overcloud networks. You must create one control plane network for your overcloud. You can also optionally create additional networks to implement network isolation for your composable networks.
3.1. Creating an overcloud network with the OpenStackNetConfig CRD
You must use the OpenStackNetConfig CRD to define at least one control plane network for your overcloud. You can also optionally define VLAN networks to create network isolation for composable networks such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment, and the mapping information for the OpenStackNetAttachment CRD. OpenShift Virtualization uses the network definition to attach any virtual machines (VMs) to the control plane and VLAN networks.
Use the following commands to view the OpenStackNetConfig CRD definition and specification schema:

$ oc describe crd openstacknetconfig
$ oc explain openstacknetconfig.spec
Procedure
- Create a file named openstacknetconfig.yaml on your workstation.
- Add the following configuration to openstacknetconfig.yaml to create the OpenStackNetConfig custom resource (CR):

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackNetConfig
  metadata:
    name: openstacknetconfig

- Configure network attachment definitions for the bridges you require for your network. For example, add configuration to openstacknetconfig.yaml to create the RHOSP bridge network attachment definition br-osp, and set the nodeNetworkConfigurationPolicy option to create a Linux bridge. For an example, see Understanding virtual machine bridging with the OpenStackNetConfig CRD.
- Optional: To use Jumbo Frames for a bridge, configure the bridge interface to use Jumbo Frames and update the value of mtu for the bridge.
- Define each overcloud network, for example, a control plane network and an isolated network for InternalAPI traffic. For a complete example, see Example OpenStackNetConfig custom resource file. Each network definition includes the following fields:

  1. The name of the network, for example, Control.
  2. The lowercase version of the network name, for example, ctlplane.
  3. The subnet specifications.
  4. The name of the subnet, for example, ctlplane.
  5. Details of the IPv4 subnet with allocationStart, allocationEnd, cidr, gateway, and an optional list of routes with destination and nexthop.
  6. The network attachment definition to connect the network to. In this example, the RHOSP bridge, br-osp, is connected to a NIC on each worker.
  7. The network definition for a composable network. To use the default RHOSP networks, you must create an OpenStackNetConfig resource for each network. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. To use different networks, you must create a custom network_data.yaml file. For information on creating a custom network_data.yaml file, see Configuring overcloud networking.
  8. The network VLAN. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. For more information on virtual machine bridging with the OpenStackNetConfig CRD, see Understanding virtual machine bridging with the OpenStackNetConfig CRD.

- Optional: Reserve static IP addresses for networks on specific nodes. Note: Reservations have precedence over any autogenerated IP addresses.
- Save the openstacknetconfig.yaml definition file.
- Create the overcloud network:

  $ oc create -f openstacknetconfig.yaml -n openstack

- To verify that the overcloud network is created, view the resources for the overcloud network:

  $ oc get openstacknetconfig/openstacknetconfig

- View the OpenStackNetConfig API and child resources:

  $ oc get openstacknetconfig/openstacknetconfig -n openstack
  $ oc get openstacknetattachment -n openstack
  $ oc get openstacknet -n openstack

  If you see errors, check the underlying network-attachment-definition and node network configuration policies:

  $ oc get network-attachment-definitions -n openstack
  $ oc get nncp
3.1.1. Default Red Hat OpenStack Platform networks
| Network | VLAN | CIDR | Allocation |
|---|---|---|---|
| External | 10 | 10.0.0.0/24 | 10.0.0.10 - 10.0.0.250 |
| InternalApi | 20 | 172.17.0.0/24 | 172.17.0.10 - 172.17.0.250 |
| Storage | 30 | 172.18.0.0/24 | 172.18.0.10 - 172.18.0.250 |
| StorageMgmt | 40 | 172.19.0.0/24 | 172.19.0.10 - 172.19.0.250 |
| Tenant | 50 | 172.20.0.0/24 | 172.20.0.10 - 172.20.0.250 |
3.2. Understanding virtual machine bridging with the OpenStackNetConfig CRD
When you create virtual machines (VMs) with the OpenStackVMSet CRD, you must connect these VMs to the relevant Red Hat OpenStack Platform (RHOSP) networks. You can use the OpenStackNetConfig CRD to create the required bridges on the Red Hat OpenShift Container Platform (RHOCP) worker nodes and connect your Controller VMs to your RHOSP overcloud networks. RHOSP requires dedicated NICs to deploy.
The OpenStackNetConfig CRD includes an attachConfigurations option, which is a hash of nodeNetworkConfigurationPolicy. Each specified attachConfiguration in an OpenStackNetConfig custom resource (CR) creates a NetworkAttachmentDefinition object, which passes network interface data to the NodeNetworkConfigurationPolicy resource in the RHOCP cluster. The NodeNetworkConfigurationPolicy resource uses the nmstate API to configure the end state of the network configuration on each RHOCP worker node. The NetworkAttachmentDefinition object for each network defines the Multus CNI plugin configuration. When you specify the VLAN ID for the NetworkAttachmentDefinition object, the Multus CNI plugin enables vlan-filtering on the bridge. Each network configured in the OpenStackNetConfig CR references one of the attachConfigurations. Inside the VMs, there is one interface for each network.
The following example creates a br-osp attachConfiguration, and configures the nodeNetworkConfigurationPolicy option to create a Linux bridge and connect the bridge to a NIC on each worker. When you apply this configuration, the NodeNetworkConfigurationPolicy object configures each RHOCP worker node to match the required end state: each worker contains a new bridge named br-osp, which is connected to the enp6s0 NIC on each host. All RHOSP Controller VMs can connect to the br-osp bridge for control plane network traffic.
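As an illustration of the attachConfiguration described above, the following hedged snippet shows how such a bridge might be declared in the OpenStackNetConfig spec. The worker NIC name enp6s0 comes from the surrounding text; the field layout follows the nmstate NodeNetworkConfigurationPolicy desiredState schema and should be validated with oc explain openstacknetconfig.spec before use.

spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - name: br-osp
            type: linux-bridge
            state: up
            mtu: 1500
            bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0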
If you specify an Internal API network through VLAN 20, you can set the attachConfiguration option to modify the networking configuration on each RHOCP worker node and connect the VLAN to the existing br-osp bridge:
The br-osp bridge already exists and is connected to the enp6s0 NIC on each host, so no change occurs to the bridge itself. However, the InternalAPI OpenStackNet associates VLAN 20 to this network, which means RHOSP Controller VMs can connect to VLAN 20 on the br-osp bridge for Internal API network traffic.
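A hedged sketch of the network entry implied by this description follows. The values come from the Default Red Hat OpenStack Platform networks table in this guide; the field names mirror the callout descriptions in Creating an overcloud network with the OpenStackNetConfig CRD and are assumptions to verify against the CRD schema.

spec:
  networks:
  - name: InternalApi
    nameLower: internal_api
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationStart: 172.17.0.10
        allocationEnd: 172.17.0.250
        cidr: 172.17.0.0/24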
When you create VMs with the OpenStackVMSet CRD, the VMs use multiple Virtio devices connected to each network. OpenShift Virtualization sorts the network names in alphabetical order except for the default network, which is the RHOCP cluster network that all the pods are connected to. The default network is always the first interface. You cannot configure the default network. For example, if you create the default RHOSP networks with OpenStackNetConfig, the following interface configuration is generated for Controller VMs:
This configuration results in the following network-to-interface mapping for Controller nodes:
| Network | Interface |
|---|---|
| default | nic1 |
| ctlplane | nic2 |
| external | nic3 |
| internalapi | nic4 |
| storage | nic5 |
| storagemgmt | nic6 |
| tenant | nic7 |
The role NIC template used by OpenStackVMSet is auto-generated. You can overwrite the default configuration in a custom NIC template file, <role>-nic-template.j2, for example, controller-nic-template.j2. You must add your custom NIC file to the tarball file that contains your overcloud configuration, which is implemented by using an OpenShift ConfigMap object. For more information, see Chapter 4. Customizing the overcloud with director Operator.
3.3. Example OpenStackNetConfig custom resource file
The following example OpenStackNetConfig custom resource (CR) file defines an overcloud network which includes a control plane network and isolated VLAN networks for a default RHOSP deployment. The example also reserves static IP addresses for networks on specific nodes.
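The original example file is not preserved in this extract, so the following is a hedged reconstruction that combines the control plane network, one isolated VLAN network, the br-osp attachment configuration, and a static IP reservation. The isolated-network values come from Default Red Hat OpenStack Platform networks; the ctlplane addressing, the reservation field layout, and the node name are assumptions. Verify the field names with oc explain openstacknetconfig.spec before you apply a file like this.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - name: br-osp
            type: linux-bridge
            state: up
            mtu: 1500
            bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      attachConfiguration: br-osp
      ipv4:
        allocationStart: 172.22.0.100
        allocationEnd: 172.22.0.250
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
  - name: InternalApi
    nameLower: internal_api
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationStart: 172.17.0.10
        allocationEnd: 172.17.0.250
        cidr: 172.17.0.0/24
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120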
Chapter 4. Customizing the overcloud with director Operator
You can customize your overcloud or enable certain features by creating heat templates and environment files that you include with your overcloud deployment. With a director Operator (OSPdO) overcloud deployment, you store these files in ConfigMap objects before running the overcloud deployment.
4.1. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
- Navigate to the location of your custom templates:

  $ cd ~/custom_templates

- Archive the templates into a gzipped tarball:

  $ tar -cvzf custom-config.tar.gz *.yaml

- Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

  $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

- Verify that the ConfigMap CR is created:

  $ oc get configmap/tripleo-tarball-config -n openstack
4.2. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap contains two environment files:
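The original example ConfigMap is not preserved in this extract; the following hedged sketch shows the shape of a heat-env-config ConfigMap that contains two environment files. The file names and parameter values are illustrative placeholders.

apiVersion: v1
kind: ConfigMap
metadata:
  name: heat-env-config
  namespace: openstack
data:
  network-environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud-names.yaml: |+
    parameter_defaults:
      CloudDomain: ocp.example.com
      CloudName: overcloud.ocp.example.com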
Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
- Create the heat-env-config ConfigMap object:

  $ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -

  Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

- Verify that the heat-env-config ConfigMap object contains all the required environment files:

  $ oc get configmap/heat-env-config -n openstack
4.3. Additional resources
Chapter 5. Creating overcloud nodes with director Operator
A Red Hat OpenStack Platform (RHOSP) overcloud consists of multiple nodes, such as Controller nodes to provide control plane services and Compute nodes to provide computing resources. For a functional overcloud with high availability, you must have 3 Controller nodes and at least one Compute node. You can create Controller nodes with the OpenStackControlPlane Custom Resource Definition (CRD) and Compute nodes with the OpenStackBaremetalSet CRD.
Red Hat OpenShift Container Platform (RHOCP) does not autodiscover issues on RHOCP worker nodes, or perform autorecovery of worker nodes that host RHOSP Controller VMs if the worker node fails or has an issue. You must enable health checks on your RHOCP cluster to automatically relocate Controller VM pods when a host worker node fails. For information on how to autodiscover issues on RHOCP worker nodes, see Deploying machine health checks.
5.1. Creating a control plane with the OpenStackControlPlane CRD
The Red Hat OpenStack Platform (RHOSP) control plane contains the RHOSP services that manage the overcloud. The default control plane consists of 3 Controller nodes. You can use composable roles to manage services on dedicated controller virtual machines (VMs). For more information on composable roles, see Composable services and custom roles.
Define an OpenStackControlPlane custom resource (CR) to create the Controller nodes as OpenShift Virtualization virtual machines (VMs).
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
Procedure
- Create a file named openstack-controller.yaml on your workstation and include the resource specification for the Controller nodes. The example below defines a specification for a control plane that consists of 3 Controller nodes; the descriptions that follow it explain the key fields.
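The guide's original listing is not preserved in this extract, so the following is a reconstruction sketch rather than the verbatim example. Field names follow the osp-director.openstack.org/v1beta1 schema as we understand it, and values such as the storage class and VM sizing are assumptions; verify the fields with oc explain openstackcontrolplane.spec.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  passwordSecret: userpassword
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 3
      networks:
        - ctlplane
        - internal_api
        - external
        - tenant
        - storage
        - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem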
  1. The name of the overcloud control plane, for example, overcloud.
  2. The OSPdO namespace, for example, openstack.
  3. The configuration for the control plane.
  4. Optional: The Secret resource that provides root access on each node to users with the password.
  5. The name of the data volume that stores the base operating system image for your Controller VMs. For more information on creating the data volume, see Creating a data volume for the base operating system.
  6. For information on configuring Red Hat OpenShift Container Platform (RHOCP) storage, see Dynamic provisioning.

- Save the openstack-controller.yaml file.
- Create the control plane:

  $ oc create -f openstack-controller.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. OSPdO also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.
Verification
View the resource for the control plane:

$ oc get openstackcontrolplane/overcloud -n openstack

View the OpenStackVMSet resources to verify the creation of the control plane VM set:

$ oc get openstackvmsets -n openstack

View the VMs to verify the creation of the control plane OpenShift Virtualization VMs:

$ oc get virtualmachines -n openstack

Test access to the openstackclient remote shell:

$ oc rsh -n openstack openstackclient
5.2. Creating a provisioning server with the OpenStackProvisionServer CRD
Provisioning servers provide a specific Red Hat Enterprise Linux (RHEL) QCOW2 image for provisioning Compute nodes for the Red Hat OpenStack Platform (RHOSP). An OpenStackProvisionServer CR is automatically created for any OpenStackBaremetalSet CRs you create. You can create the OpenStackProvisionServer CR manually and provide the name to any OpenStackBaremetalSet CRs that you create.
The OpenStackProvisionServer CRD creates an Apache server on the Red Hat OpenShift Container Platform (RHOCP) provisioning network for a specific RHEL QCOW2 image.
Procedure
- Create a file named openstack-provision.yaml on your workstation and include the resource specification for the Provisioning server. The example below defines a specification for a Provisioning server that uses a specific RHEL 9.2 QCOW2 image; the descriptions that follow it explain the key fields.
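The original listing is not preserved in this extract; this is a reconstruction sketch. The field names baseImageUrl and port are our reading of the OpenStackProvisionServer schema, and the image URL is a placeholder; confirm the fields with oc describe crd openstackprovisionserver.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstack-provision-server
  namespace: openstack
spec:
  baseImageUrl: http://<server>/images/rhel-guest-image-9.2-x86_64.qcow2
  port: 8080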
  1. The name that identifies the OpenStackProvisionServer CR.
  2. The OSPdO namespace, for example, openstack.
  3. The initial source of the RHEL QCOW2 image for the Provisioning server. The image is downloaded from this remote source when the server is created.
  4. The Provisioning server port, set to 8080 by default. You can change it for a specific port configuration.

  For further descriptions of the values you can use to configure your OpenStackProvisionServer CR, view the OpenStackProvisionServer CRD specification schema:

  $ oc describe crd openstackprovisionserver

- Save the openstack-provision.yaml file.
- Create the Provisioning Server:

  $ oc create -f openstack-provision.yaml -n openstack

- Verify that the resource for the Provisioning server is created:

  $ oc get openstackprovisionserver/openstack-provision-server -n openstack
5.3. Creating Compute nodes with the OpenStackBaremetalSet CRD
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet CRD definition and specification schema:

$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created a BareMetalHost CR for each bare-metal node that you want to add as a Compute node to the overcloud. For information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide.
Procedure
- Create a file named openstack-compute.yaml on your workstation and include the resource specification for the Compute nodes. The example below defines a specification for 1 Compute node.
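The original listing is not preserved in this extract; this is a reconstruction sketch. Field names such as count, baseImageUrl, deploymentSSHSecret, and ctlplaneInterface reflect our reading of the OpenStackBaremetalSet schema, and the image URL and NIC name are placeholders; verify the fields with oc explain openstackbaremetalset.spec.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://<server>/images/rhel-guest-image-9.2-x86_64.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp2s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
  roleName: Compute
  passwordSecret: userpassword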
- Save the openstack-compute.yaml file.
- Create the Compute nodes:

  $ oc create -f openstack-compute.yaml -n openstack
Verification
View the resource for the Compute nodes:

$ oc get openstackbaremetalset/compute -n openstack

View the bare-metal machines that RHOCP manages to verify the creation of the Compute nodes:

$ oc get baremetalhosts -n openshift-machine-api
Chapter 6. Configuring and deploying the overcloud with director Operator
You can configure your overcloud nodes after you have provisioned virtual and bare-metal nodes for your overcloud. You must create an OpenStackConfigGenerator resource to generate your Ansible playbooks, register your nodes to either the Red Hat Customer Portal or Red Hat Satellite, and then create an OpenStackDeploy resource to apply the configuration to your nodes.
6.1. Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD
After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure Red Hat OpenStack Platform (RHOSP) on the overcloud nodes. You use the OpenStackConfigGenerator custom resource definition (CRD) to create these playbooks. The OpenStackConfigGenerator CRD uses the RHOSP director config-download feature to convert heat configuration to playbooks.
Use the following commands to view the OpenStackConfigGenerator CRD definition and specification schema:

$ oc describe crd openstackconfiggenerator
$ oc explain openstackconfiggenerator.spec
Prerequisites
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created Compute nodes with the OpenStackBaremetalSet CRD.
- You have created a ConfigMap object that contains your custom heat templates.
- You have created a ConfigMap object that contains your custom environment files.
- You have manually installed the fence-agents-kubevirt package on the Controller VMs.
Procedure
- Create a file named openstack-config-generator.yaml on your workstation and include the resource specification to generate the Ansible playbooks. The example below defines a specification to generate the playbooks; the descriptions that follow it explain the key fields.
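The original listing is not preserved in this extract; this is a reconstruction sketch. The heatEnvs entries are illustrative placeholders and the other values reflect the defaults named in the field descriptions; verify the field names with oc explain openstackconfiggenerator.spec.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  enableFencing: true
  gitSecret: git-secret
  heatEnvConfigMap: heat-env-config
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
  tarballConfigMap: tripleo-tarball-config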
  1. The name of the config generator, which is default by default.
  2. Set to true to enable the automatic creation of required heat environment files to enable fencing. Production RHOSP environments must have fencing enabled. Virtual machines running Pacemaker require the fence-agents-kubevirt package.
  3. Set to the ConfigMap object that contains the Git authentication credentials, by default git-secret.
  4. The ConfigMap object that contains your custom environment files, by default heat-env-config.
  5. A list of the default heat environment files, provided by TripleO in the tripleo-heat-templates/environments directory, to use to generate the playbooks.
  6. The ConfigMap object that contains the tarball with your custom heat templates, by default tripleo-tarball-config.

- Optional: To change the location of the container images the OpenStackConfigGenerator CR uses to create the ephemeral heat service, add the image location configuration to your openstack-config-generator.yaml file:

  - Replace <heat_api_image_location> with the path to the directory where you host your heat API image, openstack-heat-api.
  - Replace <heat_engine_image_location> with the path to the directory where you host your heat engine image, openstack-heat-engine.
  - Replace <mariadb_image_location> with the path to the directory where you host your MariaDB image, openstack-mariadb.
  - Replace <rabbitmq_image_location> with the path to the directory where you host your RabbitMQ image, openstack-rabbitmq.

- Optional: Customize the ephemeral heat instance to override default heat parameter values, for example, to change the default max_template_size.
- Optional: To create the Ansible playbooks for configuration generation in debug mode, add the following configuration to your openstack-config-generator.yaml file:

  spec:
    ...
    interactive: true

  For more information on debugging an OpenStackConfigGenerator pod in interactive mode, see Debugging configuration generation.

- Save the openstack-config-generator.yaml file.
- Create the Ansible config generator:

  $ oc create -f openstack-config-generator.yaml -n openstack

- Verify that the resource for the config generator is created:

  $ oc get openstackconfiggenerator/default -n openstack
6.2. Registering the operating system of your overcloud
Before director Operator (OSPdO) configures the overcloud nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes.
As part of the OpenStackControlPlane CR, OSPdO creates an OpenStackClient pod that you access through a Remote Shell (RSH) to run Red Hat OpenStack Platform (RHOSP) commands. This pod also contains an Ansible inventory script named /home/cloud-admin/ctlplane-ansible-inventory.
To register your nodes, you can use the redhat_subscription Ansible module with the inventory script from the OpenStackClient pod.
Procedure
- Open an RSH connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Change to the cloud-admin home directory:

  $ cd /home/cloud-admin

- Create a playbook that uses the redhat_subscription modules to register your nodes, for example, a playbook named rhsm.yaml that registers Controller nodes. A sketch of such a playbook is shown after this procedure.
- Register the overcloud nodes to the required repositories:

  $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
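The original playbook listing is not preserved in this extract. The following is a sketch of an rhsm.yaml playbook that registers Controller nodes with the redhat_subscription module; the repository names are typical for RHOSP 17.1 on RHEL 9.2 and, together with the inventory group name, are assumptions to adapt to your subscription and release.

  ---
  - name: Register Controller nodes and enable repositories
    hosts: Controller
    become: true
    vars:
      repos:
        - rhel-9-for-x86_64-baseos-eus-rpms
        - rhel-9-for-x86_64-appstream-eus-rpms
        - rhel-9-for-x86_64-highavailability-eus-rpms
        - openstack-17.1-for-rhel-9-x86_64-rpms
        - fast-datapath-for-rhel-9-x86_64-rpms
    tasks:
      - name: Register the system with the Red Hat Customer Portal
        redhat_subscription:
          username: <subscription_manager_username>
          password: <subscription_manager_password>
          release: "9.2"
      - name: Disable all repositories
        command: subscription-manager repos --disable '*'
      - name: Enable the required repositories
        command: "subscription-manager repos --enable {{ item }}"
        loop: "{{ repos }}"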
6.3. Applying overcloud configuration with director Operator
You can configure the overcloud with director Operator (OSPdO) only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks to configure software on each node. When you create an OpenStackDeploy custom resource (CR), OSPdO creates a job that runs the Ansible playbooks to configure the overcloud.
Use the following commands to view the OpenStackDeploy CRD definition and specification schema:

$ oc describe crd openstackdeploy
$ oc explain openstackdeploy.spec
Prerequisites
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created Compute nodes with the OpenStackBaremetalSet CRD.
- You have used the OpenStackConfigGenerator CRD to create the Ansible playbook configuration for your overcloud.
Procedure
- Retrieve the hash/digest of the latest OpenStackConfigVersion object, which represents the Ansible playbooks that should be used to configure the overcloud:

  $ oc get -n openstack --sort-by {.metadata.creationTimestamp} openstackconfigversion -o json

- Create a file named openstack-deployment.yaml on your workstation and include the resource specification for the Ansible playbooks. A minimal sketch of this CR follows.

  - Replace <config_version> with the Ansible playbooks hash/digest retrieved in step 1, for example, n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h....
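The original listing is not preserved in this extract; this is a reconstruction sketch. The configVersion and configGenerator field names are our reading of the OpenStackDeploy schema; verify them with oc explain openstackdeploy.spec.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
  namespace: openstack
spec:
  configVersion: <config_version>
  configGenerator: default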
- Save the openstack-deployment.yaml file.
- Create the OpenStackDeploy resource:

  $ oc create -f openstack-deployment.yaml -n openstack

  As the deployment runs, it creates a Kubernetes job to execute the Ansible playbooks. You can view the logs of the job to watch the Ansible playbooks running:

  $ oc logs -f jobs/deploy-openstack-default

  You can also manually access the executed Ansible playbooks by logging in to the openstackclient pod. You can find the Ansible playbooks and the ansible.log file for the current deployment in the /home/cloud-admin/work/ directory.
6.4. Debugging configuration generation
To debug configuration generation operations, you can set the OpenStackConfigGenerator CR to use interactive mode. In interactive mode, the OpenStackConfigGenerator CR creates the environment to start rendering the playbooks, but does not automatically render the playbooks.
Prerequisites
- Your OpenStackConfigGenerator CR was created in interactive mode, with interactive: true set in the CR spec. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- The OpenStackConfigGenerator pod with the prefix generate-config has started.
Procedure
- Open a Remote Shell (RSH) connection to the OpenStackConfigGenerator pod:

  $ oc rsh $(oc get pod -o name -l job-name=generate-config-default)

- Inspect the files and playbook rendering. The following descriptions explain the key files and directories:
  1. Directory that stores the files auto-rendered by OSPdO.
  2. Directory that stores the environment files specified with the heatEnvConfigMap option.
  3. Directory that stores the overcloud service passwords created by OSPdO.
  4. Script that renders the Ansible playbooks.
  5. Internal script used by create-playbooks to replicate the undocumented heat client merging of map parameters.
  6. Directory that stores the tarball specified with the tarballConfigMap option.
Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator
You can use director Operator (OSPdO) to deploy an overcloud with hyperconverged infrastructure (HCI). An overcloud with HCI colocates Compute and Red Hat Ceph Storage OSD services on the same nodes.
7.1. Prerequisites
- Your Compute HCI nodes require extra disks to use as OSDs.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
- You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
- You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
- You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
- You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
7.2. Creating a roles_data.yaml file with the Compute HCI role for director Operator
To include configuration for the Compute HCI role in your overcloud, you must include the Compute HCI role in the roles_data.yaml file that you include with your overcloud deployment.
Ensure that you use roles_data.yaml as the file name.
Procedure
- Access the remote shell for openstackclient:

  $ oc rsh -n openstack openstackclient

- Unset the OS_CLOUD environment variable:

  $ unset OS_CLOUD

- Change to the cloud-admin directory:

  $ cd /home/cloud-admin/

- Generate a new roles_data.yaml file with the Controller and ComputeHCI roles:

  $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI

- Exit the openstackclient pod:

  $ exit

- Copy the custom roles_data.yaml file from the openstackclient pod to your custom templates directory:

  $ oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack
7.3. Configuring HCI networking in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute HCI role.
Procedure
- Create a directory for your custom templates:

  $ mkdir custom_templates

- Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
- Add configuration for the NICs of your bare-metal nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for HCI Compute nodes.
- Create a directory for your custom environment files:

  $ mkdir custom_environment_files

- Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:

  parameter_defaults:
    ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
7.4. Custom NIC heat template for HCI Compute nodes
The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
|---|---|---|
| Control Plane, Storage, Internal API | N/A | |
| External, Tenant | | |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
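The following is a minimal sketch of such a Jinja2 template, assuming that one interface (nic3 here) carries the control plane with VLANs for the Storage and Internal API networks, and that a second interface (nic4) is a member of an OVS bridge (br-ex) that carries the External and Tenant networks, matching the table above. The NIC names, bridge name, VLAN layout, and template variables are illustrative assumptions; adapt them to your environment and to the variables available in your deployment:

  ---
  network_config:
  # Control plane interface with VLANs for the isolated networks
  - type: interface
    name: nic3
    mtu: {{ ctlplane_mtu }}
    use_dhcp: false
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
    routes: {{ ctlplane_host_routes }}
  - type: vlan
    device: nic3
    vlan_id: {{ storage_vlan_id }}
    addresses:
    - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
  - type: vlan
    device: nic3
    vlan_id: {{ internal_api_vlan_id }}
    addresses:
    - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
  # OVS bridge for the External and Tenant networks
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
    - type: interface
      name: nic4
      primary: true
    - type: vlan
      vlan_id: {{ external_vlan_id }}
      addresses:
      - ip_netmask: {{ external_ip }}/{{ external_cidr }}
    - type: vlan
      vlan_id: {{ tenant_vlan_id }}
      addresses:
      - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }}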
7.5. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
- Navigate to the location of your custom templates:

  $ cd ~/custom_templates

- Archive the templates into a gzipped tarball:

  $ tar -cvzf custom-config.tar.gz *.yaml

- Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

  $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

- Verify that the ConfigMap CR is created:

  $ oc get configmap/tripleo-tarball-config -n openstack
7.6. Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator
The following example is an environment file that contains Red Hat Ceph Storage configuration for the Compute HCI nodes. This configuration maps the OSD nodes to the sdb, sdc, and sdd devices and enables HCI with the is_hci option.
You can modify this configuration to suit the storage configuration of your bare-metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum parameter.
To use this template in your deployment, copy the contents of the example to compute-hci.yaml in your custom_environment_files directory on your workstation.
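A minimal sketch of such an environment file, using the device list and HCI options that this section describes; the parameter names follow the ceph-ansible style options referenced here, and values such as device paths, replica count, and PG numbers are placeholders to adjust for your hardware:

  parameter_defaults:
    CephAnsibleDisksConfig:
      devices:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      osd_scenario: lvm
      osd_objectstore: bluestore
    CephAnsibleExtraConfig:
      is_hci: true
    CephPoolDefaultPgNum: 32
    CephPoolDefaultSize: 2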
7.7. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap contains two environment files:
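An illustrative heat-env-config ConfigMap with two environment files; the file names match the ones used in this chapter, and the contents are truncated placeholders:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: heat-env-config
    namespace: openstack
  data:
    network-environment.yaml: |+
      parameter_defaults:
        ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
    compute-hci.yaml: |+
      parameter_defaults:
        CephAnsibleExtraConfig:
          is_hci: true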
Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
- Create the heat-env-config ConfigMap object:

  $ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -

  Replace <dir_custom_environment_files> with the directory that contains the environment files that you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

- Verify that the heat-env-config ConfigMap object contains all the required environment files:

  $ oc get configmap/heat-env-config -n openstack
7.8. Creating HCI Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
Procedure
- Create a file named openstack-hcicompute.yaml on your workstation. Include the resource specification for the HCI Compute nodes.
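For example, a specification for 3 HCI Compute nodes might look like the following sketch. The field values (base image URL, secret names, and interface name) are placeholders, and the exact field set can vary between OSPdO versions, so verify it with oc explain openstackbaremetalset.spec:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackBaremetalSet
  metadata:
    name: computehci
    namespace: openstack
  spec:
    count: 3
    baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
    deploymentSSHSecret: osp-controlplane-ssh-keys
    ctlplaneInterface: enp7s0
    networks:
      - ctlplane
      - internal_api
      - tenant
      - storage
      - storage_mgmt
    roleName: ComputeHCI
    passwordSecret: userpassword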
- Save the openstack-hcicompute.yaml file.
- Create the HCI Compute nodes:

  $ oc create -f openstack-hcicompute.yaml -n openstack

- Verify that the resource for the HCI Compute nodes is created:

  $ oc get openstackbaremetalset/computehci -n openstack

- To verify the creation of the HCI Compute nodes, view the bare-metal machines that RHOCP manages:

  $ oc get baremetalhosts -n openshift-machine-api

- Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
Chapter 8. Deploying RHOSP with an external Red Hat Ceph Storage cluster with director Operator
You can use director Operator (OSPdO) to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.
Prerequisites
- You have an external Red Hat Ceph Storage cluster.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
- You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator.
- You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator.
- You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
- You have created and applied an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
8.1. Configuring networking for the Compute role in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute role.
Procedure
- Create a directory for your custom templates:

  $ mkdir custom_templates

- Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory.
- Add configuration for the NICs of your bare-metal Compute nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for Compute nodes.
- Create a directory for your custom environment files:

  $ mkdir custom_environment_files

- Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory:

  parameter_defaults:
    ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
8.2. Custom NIC heat template for Compute nodes
The following example is a heat template that contains NIC configuration for the Compute bare-metal nodes in an overcloud that connects to an external Red Hat Ceph Storage cluster. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
|---|---|---|
| Control Plane, Storage, Internal API | N/A | |
| External, Tenant | | |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
8.3. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
- Navigate to the location of your custom templates:

  $ cd ~/custom_templates

- Archive the templates into a gzipped tarball:

  $ tar -cvzf custom-config.tar.gz *.yaml

- Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:

  $ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack

- Verify that the ConfigMap CR is created:

  $ oc get configmap/tripleo-tarball-config -n openstack
8.4. Custom environment file for configuring external Ceph Storage usage in director Operator
To integrate with an external Red Hat Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example. The example enables the CephExternal and CephClient services on your overcloud nodes, and sets the pools for different RHOSP services.
You can modify this configuration to suit your storage configuration.
To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml in your custom_environment_files directory on your workstation.
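A minimal sketch of such an environment file; the FSID, client key, and monitor host values are placeholders, the pool names shown are the common defaults for the RHOSP services, and the CephExternal and CephClient services are typically enabled by also including the external Ceph environment file from the core heat template collection:

  parameter_defaults:
    # Connection details for the existing external Ceph cluster
    CephClusterFSID: '<ceph_cluster_fsid>'
    CephClientKey: '<ceph_client_key>'
    CephExternalMonHost: '<mon_host_ip_1>,<mon_host_ip_2>,<mon_host_ip_3>'
    CephClientUserName: openstack
    # Pools used by the RHOSP services
    NovaRbdPoolName: vms
    CinderRbdPoolName: volumes
    CinderBackupRbdPoolName: backups
    GlanceRbdPoolName: images
    GlanceBackend: rbd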
8.5. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap contains two environment files:
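An illustrative heat-env-config ConfigMap with two environment files; the file names match the ones used in this chapter, and the contents are truncated placeholders:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: heat-env-config
    namespace: openstack
  data:
    network-environment.yaml: |+
      parameter_defaults:
        ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
    ceph-ansible-external.yaml: |+
      parameter_defaults:
        CephClusterFSID: '<ceph_cluster_fsid>'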
Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
- Create the heat-env-config ConfigMap object:

  $ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -

  Replace <dir_custom_environment_files> with the directory that contains the environment files that you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

- Verify that the heat-env-config ConfigMap object contains all the required environment files:

  $ oc get configmap/heat-env-config -n openstack
8.6. Creating Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
Procedure
- Create your Compute nodes by using the OpenStackBaremetalSet CRD. For more information, see Creating Compute nodes with the OpenStackBaremetalSet CRD.
- Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
Chapter 9. Accessing an overcloud deployed with director Operator
After you deploy the overcloud with director Operator (OSPdO), you can access it and run commands with the openstack client tool. The main access point for the overcloud is through the OpenStackClient pod that OSPdO deploys as a part of the OpenStackControlPlane resource that you created.
9.1. Accessing the OpenStackClient pod
The OpenStackClient pod is the main access point to run commands against the overcloud. This pod contains the client tools and authentication details that you require to perform actions on your overcloud. To access the pod from your workstation, you must use the oc command on your workstation to connect to the remote shell for the pod.
When you access an overcloud that you deploy without director Operator (OSPdO), you usually run the source ~/overcloudrc command to set environment variables to access the overcloud. You do not require this step with an overcloud that you deploy with OSPdO.
Procedure
- Access the remote shell for openstackclient:

  $ oc rsh -n openstack openstackclient

- Change to the cloud-admin home directory:

  $ cd /home/cloud-admin

- Run your openstack commands. For example, you can create a default network with the following command:

  $ openstack network create default
9.2. Accessing the overcloud dashboard
You access the dashboard of an overcloud that you deploy with director Operator (OSPdO) by using the same method as a standard overcloud: access the virtual IP address reserved by the control plane by using a web browser.
Procedure
- Optional: To log in as the admin user, obtain the admin password from the AdminPassword parameter in the tripleo-passwords secret:

  $ oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d

- Retrieve the IP address reserved for the control plane from your OpenStackNetConfig CR (an example command sketch follows this procedure).
- Open a web browser.
- Enter the IP address for the control plane in the URL field.
- Log in to the dashboard with your username and password.
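For the control plane IP address, one option is to dump the OpenStackNetConfig CR and look for the ctlplane entries; the exact layout of the reservations in the CR depends on your OSPdO version, so the grep pattern here is only illustrative:

  $ oc get openstacknetconfig/openstacknetconfig -n openstack -o yaml | grep -A 5 ctlplane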
Chapter 10. Scaling Compute nodes with director Operator
If you require more or fewer compute resources for your overcloud, you can scale the number of Compute nodes according to your requirements.
10.1. Adding Compute nodes to your overcloud with director Operator
To add more Compute nodes to your overcloud, you must increase the node count for the compute OpenStackBaremetalSet resource. When a new node is provisioned, you create a new OpenStackConfigGenerator resource to generate a new set of Ansible playbooks, then use the OpenStackConfigVersion to create or update the OpenStackDeploy object to reapply the Ansible configuration to your overcloud.
Procedure
- Check that you have enough hosts in a ready state in the openshift-machine-api namespace:

  $ oc get baremetalhosts -n openshift-machine-api

  For more information on managing your bare-metal hosts, see Managing bare metal hosts.

- Increase the count parameter for the compute OpenStackBaremetalSet resource:

  $ oc patch openstackbaremetalset compute --type=merge --patch '{"spec":{"count":3}}' -n openstack

  The OpenStackBaremetalSet resource automatically provisions the new nodes with the Red Hat Enterprise Linux base operating system.

- Wait until the provisioning process completes. Check the nodes periodically to determine the readiness of the nodes:

  $ oc get baremetalhosts -n openshift-machine-api
  $ oc get openstackbaremetalset
Optional: Reserve static IP addresses for networks on the new Compute nodes. For more information, see Reserving static IP addresses for added Compute nodes with the
OpenStackNetConfigCRD. -
Generate the Ansible playbooks by using
OpenStackConfigGenerator. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. - Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
10.2. Reserving static IP addresses for added Compute nodes with the OpenStackNetConfig CRD
Use the OpenStackNetConfig CRD to define IP addresses that you want to reserve for the Compute nodes that you added to your overcloud.
Use the following commands to view the OpenStackNetConfig CRD definition and specification schema:
$ oc describe crd openstacknetconfig
$ oc explain openstacknetconfig.spec
Procedure
- Open the openstacknetconfig.yaml file for the overcloud on your workstation.
- Add the following configuration to openstacknetconfig.yaml to create the OpenStackNetConfig custom resource (CR):

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackNetConfig
  metadata:
    name: openstacknetconfig

- Reserve static IP addresses for networks on specific nodes, as shown in the example sketch after this procedure.

  Note: Reservations have precedence over any autogenerated IP addresses.

- Save the openstacknetconfig.yaml definition file.
- Create the overcloud network configuration:

  $ oc create -f openstacknetconfig.yaml -n openstack
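A sketch of such a reservation, assuming the per-host reservations map in the OpenStackNetConfig spec; the host name, network names, and IP addresses are placeholders:

  spec:
    # ...
    reservations:
      compute-3:
        ipReservations:
          ctlplane: 172.22.0.113
          internal_api: 172.17.0.113
          storage: 172.18.0.113
          tenant: 172.20.0.113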
Verification
- To verify that the overcloud network configuration is created, view the resources for the overcloud network configuration:

  $ oc get openstacknetconfig/openstacknetconfig

- View the OpenStackNetConfig API and child resources:

  $ oc get openstacknetconfig/openstacknetconfig -n openstack
  $ oc get openstacknetattachment -n openstack
  $ oc get openstacknet -n openstack

- If you see errors, check the underlying network-attachment-definition and node network configuration policies:

  $ oc get network-attachment-definitions -n openstack
  $ oc get nncp
10.3. Removing Compute nodes from your overcloud with director Operator
To remove a Compute node from your overcloud, you must disable the Compute node, mark it for deletion, and decrease the node count for the compute OpenStackBaremetalSet resource.
If you scale the overcloud with a new node in the same role, the node reuses the host names starting with the lowest ID suffix and the corresponding IP reservation.
Prerequisites
- The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes.
Procedure
- Access the remote shell for openstackclient:

  $ oc rsh -n openstack openstackclient

- Identify the Compute node that you want to remove:

  $ openstack compute service list

- Disable the Compute service on the node to prevent the node from scheduling new instances:

  $ openstack compute service set <hostname> nova-compute --disable

- Annotate the bare-metal node to prevent Metal3 from starting the node:

  $ oc annotate baremetalhost <node> baremetalhost.metal3.io/detached=true
  $ oc logs --since=1h <metal3-pod> metal3-baremetal-operator | grep -i detach
  $ oc get baremetalhost <node> -o json | jq .status.operationalStatus
  "detached"

  Replace <node> with the name of the BareMetalHost resource.
  Replace <metal3-pod> with the name of your metal3 pod.

- Log in to the Compute node as the root user and shut down the bare-metal node:

  [root@compute-0 ~]# shutdown -h now

  If the Compute node is not accessible, complete the following steps:

  - Log in to a Controller node as the root user.
  - If Instance HA is enabled, disable the STONITH device for the Compute node:

    [root@controller-0 ~]# pcs stonith disable <stonith_resource_name>

    Replace <stonith_resource_name> with the name of the STONITH resource that corresponds to the node. The resource name uses the format <resource_agent>-<host_mac>. You can find the resource agent and the host MAC address in the FencingConfig section of the fencing.yaml file.

  - Use IPMI to power off the bare-metal node. For more information, see your hardware vendor documentation.

- Retrieve the BareMetalHost resource that corresponds to the node that you want to remove:

  $ oc get openstackbaremetalset compute -o json | jq '.status.baremetalHosts | to_entries[] | "\(.key) => \(.value | .hostRef)"'
  "compute-0, openshift-worker-3"
  "compute-1, openshift-worker-4"

- To change the status of the annotatedForDeletion parameter to true in the OpenStackBaremetalSet resource, annotate the BareMetalHost resource with osp-director.openstack.org/delete-host=true:

  $ oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite

- Optional: Confirm that the annotatedForDeletion status has changed to true in the OpenStackBaremetalSet resource.
- Decrease the count parameter for the compute OpenStackBaremetalSet resource:

  $ oc patch openstackbaremetalset compute --type=merge --patch '{"spec":{"count":1}}' -n openstack

  When you reduce the resource count of the OpenStackBaremetalSet resource, you trigger the corresponding controller to handle the resource deletion, which causes the following actions:

  - Director Operator deletes the corresponding IP reservations from OpenStackIPSet and OpenStackNetConfig for the deleted node.
  - Director Operator flags the IP reservation entry in the OpenStackNet resource as deleted.

- Optional: To make the IP reservations of the deleted OpenStackBaremetalSet resource available for other roles to use, set the value of the spec.preserveReservations parameter to false in the OpenStackNetConfig object.
- Access the remote shell for openstackclient:

  $ oc rsh openstackclient -n openstack

- Remove the Compute service entries from the overcloud:

  $ openstack compute service list
  $ openstack compute service delete <service-id>

- Check the Compute network agent entries in the overcloud and remove them if they exist:

  $ openstack network agent list
  $ for AGENT in $(openstack network agent list --host <scaled-down-node> -c ID -f value) ; do openstack network agent delete $AGENT ; done

- Exit from openstackclient:

  $ exit
Chapter 11. Performing a minor update of the RHOSP overcloud with director Operator
A minor update of your Red Hat OpenStack Platform (RHOSP) environment involves updating the RPM packages and containers on the overcloud nodes. You might also need to update the configuration of some services. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment:
- Prepare your RHOSP environment for the minor update.
- Optional: Update the ovn-controller container.
- Update Controller nodes and composable nodes that contain Pacemaker services.
- Update Compute nodes.
- Update Red Hat Ceph Storage nodes.
- Update the Red Hat Ceph Storage cluster.
- Reboot the overcloud nodes.
Prerequisites
- You have a backup of your RHOSP deployment. For more information, see Backing up and restoring a director Operator deployed overcloud.
11.1. Preparing director Operator for a minor update
To prepare your Red Hat OpenStack Platform (RHOSP) environment to perform a minor update with director Operator (OSPdO), complete the following tasks:
- Update the openstackclient pod.
- Lock the RHOSP environment to a Red Hat Enterprise Linux (RHEL) release.
- Update RHOSP repositories.
- Update the container image preparation file.
- Disable fencing in the overcloud.
11.1.1. Updating the openstackclient pod
Update the openstackclient pod container image to use the correct director heat templates and Ansible roles.
Procedure
- Change to the openstack project:

  $ oc project openstack

- Edit the CSV file:

  $ oc edit csv

- Update the following values to the new RHOSP minor version:

  OPENSTACKCLIENT_IMAGE_URL_DEFAULT
  HEAT_API_IMAGE_URL_DEFAULT
  HEAT_ENGINE_IMAGE_URL_DEFAULT
  MARIADB_IMAGE_URL_DEFAULT
  RABBITMQ_IMAGE_URL_DEFAULT

- Delete any existing ephemeral Heat instances:

  $ oc delete openstackephemeralheat --all

- Remove the current imageURL from the openstackclient custom resource to update the pod to the new image:

  $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
11.1.2. Locking the RHOSP environment to a RHEL release
Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release.
Procedure
- Copy the overcloud subscription management environment file, rhsm.yaml, to openstackclient:

  $ oc cp rhsm.yaml openstackclient:/home/cloud-admin/rhsm.yaml

- Access the remote shell for the openstackclient pod:

  $ oc rsh openstackclient

- Open the rhsm.yaml file and check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.2:
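A minimal sketch of the relevant rhsm.yaml content, assuming the subscription management configuration is expressed through the RhsmVars parameter; the organization and pool values are placeholders:

  parameter_defaults:
    RhsmVars:
      rhsm_org_id: "<org_id>"
      rhsm_pool_ids: "<pool_id>"
      rhsm_method: "portal"
      rhsm_release: "9.2"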
- Save the rhsm.yaml file.
- Create a playbook named set_release.yaml that contains a task to lock the operating system version to RHEL 9.2 on all nodes:
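A minimal sketch of such a playbook, using the subscription-manager command directly; the host targeting and module choice are assumptions:

  - hosts: all
    gather_facts: false
    become: true
    tasks:
      - name: Lock the RHEL release to 9.2
        command: subscription-manager release --set=9.2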
- Run the set_release.yaml playbook on the openstackclient pod:

  $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/set_release.yaml --limit Controller,Compute

  Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because you might have a different subscription for these nodes.

  Note: To manually lock a node to a version, log in to the node and run the subscription-manager release command:

  $ sudo subscription-manager release --set=9.2

- Exit the remote shell for the openstackclient pod:

  $ exit
11.1.3. Updating RHOSP repositories
Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1.
Procedure
- Open the rhsm.yaml file and update the rhsm_repos parameter to the correct repository versions:
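A sketch of the updated parameter; the repository names listed here are the commonly documented RHOSP 17.1 repositories, so verify them against your subscription before you use them:

  parameter_defaults:
    RhsmVars:
      rhsm_repos:
        - rhel-9-for-x86_64-baseos-eus-rpms
        - rhel-9-for-x86_64-appstream-eus-rpms
        - rhel-9-for-x86_64-highavailability-eus-rpms
        - openstack-17.1-for-rhel-9-x86_64-rpms
        - fast-datapath-for-rhel-9-x86_64-rpms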
- Save the rhsm.yaml file.
- Access the remote shell for the openstackclient pod:

  $ oc rsh openstackclient

- Create a playbook named update_rhosp_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all nodes:
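A minimal sketch of such a playbook; the repository list mirrors the rhsm_repos example above and is an assumption to verify against your subscription:

  - hosts: all
    gather_facts: false
    become: true
    tasks:
      - name: Enable only the RHOSP 17.1 repositories
        command: >
          subscription-manager repos --disable=*
          --enable=rhel-9-for-x86_64-baseos-eus-rpms
          --enable=rhel-9-for-x86_64-appstream-eus-rpms
          --enable=rhel-9-for-x86_64-highavailability-eus-rpms
          --enable=openstack-17.1-for-rhel-9-x86_64-rpms
          --enable=fast-datapath-for-rhel-9-x86_64-rpms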
- Run the update_rhosp_repos.yaml playbook on the openstackclient pod:

  $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute

  Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription.

- Create a playbook named update_ceph_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all Red Hat Ceph Storage nodes.
- Run the update_ceph_repos.yaml playbook on the openstackclient pod:

  $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage

  Use the --limit option to apply the content to Red Hat Ceph Storage nodes.

- Exit the remote shell for the openstackclient pod:

  $ exit
11.1.4. Updating the container image preparation file
The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud.
Before you update your environment, check the file to ensure that you obtain the correct image versions.
Procedure
- Edit the container preparation file. The default name for this file is containers-prepare-parameter.yaml.
- Ensure the tag parameter is set to 17.1 for each rule set:
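An illustrative rule set; the namespace and name prefix values are the usual defaults for RHOSP 17.1 container images, but treat them as placeholders for your own registry configuration:

  parameter_defaults:
    ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: registry.redhat.io/rhosp-rhel9
          name_prefix: openstack-
          tag: '17.1'
        tag_from_label: '{version}-{release}'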
  Note: If you do not want to use a specific tag for the update, such as 17.1 or 17.1.1, remove the tag key-value pair and specify tag_from_label only. The tag_from_label tag uses the installed Red Hat OpenStack Platform (RHOSP) version to determine the value for the tag to use as part of the update process. For more information about version tagging, see Guidelines for container image tagging in Customizing your Red Hat OpenStack Platform deployment.

- Save the containers-prepare-parameter.yaml file.
11.1.5. Disabling fencing in the overcloud
Before you update the overcloud, ensure that fencing is disabled.
If fencing is deployed in your environment during the Controller nodes update process, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results.
If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update.
Procedure
- Access the remote shell for the openstackclient pod:

  $ oc rsh openstackclient

- Log in to a Controller node and run the Pacemaker command to disable fencing:

  $ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=false"

  Replace <controller-0.ctlplane> with the name of your Controller node.

- Exit the remote shell for the openstackclient pod:

  $ exit
Additional Resources
11.2. Running the overcloud update preparation for director Operator
To prepare the overcloud for the update process, generate an update prepare configuration, which creates updated Ansible playbooks and prepares the nodes for the update.
Procedure
- Create a file on your workstation named osconfiggenerator-update-prepare.yaml to define the OpenStackConfigGenerator resource. List all the heat environment files, including any custom heat environment files that you created for your deployment.
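A minimal sketch of the resource; the heatEnvs entry shown is the lifecycle update-prepare environment from the core heat templates, and the ConfigMap and secret names are placeholders that must match the objects you created for your deployment:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackConfigGenerator
  metadata:
    name: default
    namespace: openstack
  spec:
    gitSecret: git-secret
    enableFencing: false
    heatEnvs:
      - lifecycle/update-prepare.yaml
    heatEnvConfigMap: heat-env-config
    tarballConfigMap: tripleo-tarball-config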
- Apply the configuration:

  $ oc apply -f osconfiggenerator-update-prepare.yaml

- Wait until the update preparation process completes.
11.3. Updating the ovn-controller container on all overcloud servers
If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. The update occurs on every overcloud server that runs the ovn-controller container.
The following procedure updates the ovn-controller containers on Compute nodes before it updates the ovn-northd service on Controller nodes. If you accidentally update the ovn-northd service before following this procedure, you might not be able to reach your virtual machine instances or create new instances or virtual networks. The following procedure restores connectivity.
Procedure
- Create an OpenStackDeploy custom resource (CR) named osdeploy-ovn-update.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-ovn-update.yaml

- Wait until the ovn-controller container update completes.
11.4. Updating all Controller nodes
Update all the Controller nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
Procedure
- Create an OpenStackDeploy custom resource (CR) named osdeploy-controller-update.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-controller-update.yaml

- Wait until the Controller node update completes.
11.5. Updating all Compute nodes
Update all Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: Compute option to restrict operations only to the Compute nodes.
Procedure
- Create an OpenStackDeploy CR named osdeploy-compute-update.yaml.
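A sketch of such a CR, using the limit option that this section describes; whether limit sits directly in the spec or under an advancedSettings block can differ between OSPdO versions, so verify the schema with oc explain openstackdeploy.spec:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: compute-update
    namespace: openstack
  spec:
    configVersion: <config_version>
    configGenerator: default
    mode: update
    advancedSettings:
      limit: Compute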
- Apply the updated configuration:

  $ oc apply -f osdeploy-compute-update.yaml

- Wait until the Compute node update completes.
11.6. Updating all HCI Compute nodes
Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update the HCI Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: ComputeHCI option to restrict operations to only the HCI nodes. You must also create an OpenStackDeploy CR with the mode: external-update and tags: ["ceph"] options to perform an update to a containerized Red Hat Ceph Storage 4 cluster.
Procedure
- Create an OpenStackDeploy CR named osdeploy-computehci-update.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-computehci-update.yaml

- Wait until the ComputeHCI node update completes.
- Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml.
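A sketch of such a CR, using the mode and tag options that this section describes; as with the other update CRs, verify the exact field placement with oc explain openstackdeploy.spec:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: ceph-update
    namespace: openstack
  spec:
    configVersion: <config_version>
    configGenerator: default
    mode: external-update
    advancedSettings:
      tags:
        - ceph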
- Apply the updated configuration:

  $ oc apply -f osdeploy-ceph-update.yaml

- Wait until the Red Hat Ceph Storage node update completes.
11.7. Updating all Red Hat Ceph Storage nodes
Update the Red Hat Ceph Storage nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the CephStorage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations.
Procedure
- Create an OpenStackDeploy custom resource (CR) named osdeploy-cephstorage-update.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-cephstorage-update.yaml

- Wait until the Red Hat Ceph Storage node update completes.
- Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-ceph-update.yaml

- Wait until the Red Hat Ceph Storage node update completes.
11.8. Updating the Red Hat Ceph Storage cluster
Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm Orchestrator.
This procedure uses cephadm to upgrade your deployment. If you are using pre-provisioned nodes, cephadm is available by default in the first Controller node. You can manually install it in the other Controllers to access the cephadm shell.
For more information about installing cephadm, see the Red Hat Ceph Storage 6 Installation Guide.
Procedure
- Access the remote shell for the openstackclient pod:

  $ oc rsh openstackclient

- Log in to the first Controller node:

  $ ssh <controller-0.ctlplane>

  Replace <controller-0.ctlplane> with the name of the first Controller node in your deployment.

- Log in to the cephadm shell:

  [cloud-admin@controller-0 ~]$ sudo cephadm shell

- Upgrade your Red Hat Ceph Storage cluster by using cephadm. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide.
- Exit the remote shell for the openstackclient pod:

  $ exit
11.9. Performing online database updates
Some overcloud components require an online update or migration of their database tables. Online database updates apply to the following components:
- Block Storage service (cinder)
- Compute service (nova)
Procedure
- Create an OpenStackDeploy custom resource (CR) named osdeploy-online-migration.yaml.
- Apply the updated configuration:

  $ oc apply -f osdeploy-online-migration.yaml
11.10. Re-enabling fencing in the overcloud
To update to the latest Red Hat OpenStack Platform (RHOSP) 17.1, you must re-enable fencing in the overcloud.
Procedure
- Access the remote shell for the openstackclient pod:

  $ oc rsh openstackclient

- Log in to a Controller node and run the Pacemaker command to enable fencing:

  $ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=true"

  Replace <controller-0.ctlplane> with the name of your Controller node.

- Exit the remote shell for the openstackclient pod:

  $ exit
11.11. Rebooting the overcloud
After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.1 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures.
Use the following guidance to understand how to reboot different node types:
- If you reboot all nodes in one role, reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation.
Complete the reboot procedures on the nodes in the following order: Controller and composable nodes first, then Ceph Storage (OSD) nodes, and finally Compute nodes.
11.11.1. Rebooting Controller and composable nodes
Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes.
Procedure
- Log in to the node that you want to reboot.
- Optional: If the node uses Pacemaker resources, stop the cluster:

  [tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop

- Reboot the node:

  [tripleo-admin@overcloud-controller-0 ~]$ sudo reboot

- Wait until the node boots.
Verification
Verify that the services are enabled.
- If the node uses Pacemaker services, check that the node has rejoined the cluster:

  [tripleo-admin@overcloud-controller-0 ~]$ sudo pcs status

- If the node uses Systemd services, check that all services are enabled:

  [tripleo-admin@overcloud-controller-0 ~]$ sudo systemctl status

- If the node uses containerized services, check that all containers on the node are active:

  [tripleo-admin@overcloud-controller-0 ~]$ sudo podman ps
11.11.2. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Prerequisites
- On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:

  $ sudo cephadm shell -- ceph status

  If the Ceph cluster is healthy, it returns a status of HEALTH_OK.

  If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
- Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily:

  $ sudo cephadm shell -- ceph osd set noout
  $ sudo cephadm shell -- ceph osd set norebalance

  Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring.

- Select the first Ceph Storage node that you want to reboot and log in to the node.
- Reboot the node:

  $ sudo reboot

- Wait until the node boots.
- Log in to the node and check the Ceph cluster status:

  $ sudo cephadm shell -- ceph status

  Check that the pgmap reports all pgs as normal (active+clean).

- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
- When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing:

  $ sudo cephadm shell -- ceph osd unset noout
  $ sudo cephadm shell -- ceph osd unset norebalance

  Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring

- Perform a final status check to verify that the cluster reports HEALTH_OK:

  $ sudo cephadm shell ceph status
11.11.3. Rebooting Compute nodes
To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot.
Migrating instances workflow
- Decide whether to migrate instances to another Compute node before rebooting the node.
- Select and disable the Compute node that you want to reboot so that it does not provision new instances.
- Migrate the instances to another Compute node.
- Reboot the empty Compute node.
- Enable the empty Compute node.
Prerequisites
Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.
Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation.
Note: If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation.
If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:
NovaResumeGuestsStateOnHostBoot
Determines whether to return instances to the same state on the Compute node after reboot. When set to False, the instances remain down and you must start them manually. The default value is False.
NovaResumeGuestsShutdownTimeout
Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0. The default value is 300.
For more information about overcloud parameters and their usage, see Overcloud parameters.
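For example, you can set both parameters in an environment file that you include in your overcloud configuration. This is a minimal sketch; the filename is an assumption, not a value from this guide:

# compute-reboot.yaml (illustrative filename): resume guests automatically after a
# Compute node reboot and allow up to 300 seconds for a clean shutdown.
parameter_defaults:
  NovaResumeGuestsStateOnHostBoot: true
  NovaResumeGuestsShutdownTimeout: 300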
Procedure
- Log in to the undercloud as the stack user.
Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot:

(undercloud)$ source ~/overcloudrc
(overcloud)$ openstack compute service list

Identify the host name of the Compute node that you want to reboot.
Disable the Compute service on the Compute node that you want to reboot:

(overcloud)$ openstack compute service list
(overcloud)$ openstack compute service set <hostname> nova-compute --disable

- Replace <hostname> with the host name of your Compute node.
List all instances on the Compute node:

(overcloud)$ openstack server list --host <hostname> --all-projects

Optional: To migrate the instances to another Compute node, complete the following steps:
If you decide to migrate the instances to another Compute node, use one of the following commands:
To migrate the instance to a different host, run the following command:

(overcloud)$ openstack server migrate <instance_id> --live <target_host> --wait

- Replace <instance_id> with your instance ID.
- Replace <target_host> with the host that you are migrating the instance to.
Let nova-scheduler automatically select the target host:

(overcloud)$ nova live-migration <instance_id>

Live migrate all instances at once:

$ nova host-evacuate-live <hostname>

Note: The nova command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm that the migration was successful:

(overcloud)$ openstack server list --host <hostname> --all-projects

- Continue to migrate instances until none remain on the Compute node.
Log in to the Compute node and reboot the node:

[tripleo-admin@overcloud-compute-0 ~]$ sudo reboot

- Wait until the node boots.
Re-enable the Compute node:

$ source ~/overcloudrc
(overcloud)$ openstack compute service set <hostname> nova-compute --enable

Check that the Compute node is enabled:

(overcloud)$ openstack compute service list
11.11.4. Validating RHOSP after the overcloud update
After you update your Red Hat OpenStack Platform (RHOSP) environment, validate your overcloud with the tripleo-validations playbooks.
For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.
Procedure
- Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:

$ source ~/stackrc

Run the validation:

$ validation run -i ~/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml --group post-update

- Replace <stack> with the name of the stack.
Verification
- To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
If a host is not found when you run a validation, the command reports the status as SKIPPED. A status of SKIPPED means that the validation is not executed, which is expected. Additionally, if a validation’s pass criteria is not met, the command reports the status as FAILED. A FAILED validation does not prevent you from using your updated RHOSP environment. However, a FAILED validation can indicate an issue with your environment.
Chapter 12. Deploying TLS for public endpoints using director Operator
Deploy the overcloud using TLS to create public endpoint IPs or DNS names for director Operator (OSPdO).
Prerequisites
- You have installed OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have installed the oc command line tool on your workstation.
- You have created the certificate authority, key, and certificate. For more information, see Enabling SSL/TLS on overcloud public endpoints.
12.1. TLS for public endpoint IP addresses
To reference public endpoint IP addresses, add your CA certificates to the openstackclient pod by creating a ConfigMap resource to store the CA certificates, then referencing that ConfigMap resource in the OpenStackControlPlane resource.
Procedure
Create a ConfigMap resource to store the CA certificates (an illustrative command follows this procedure).
Create the OpenStackControlPlane resource and reference the ConfigMap resource.
- Replace <overcloud> with the name of your overcloud control plane.
- Create a file in the ~/custom_environment_files directory named tls-certs.yaml that specifies the generated certificates for the deployment by using the SSLCertificate, SSLIntermediateCertificate, SSLKey, and CAMap parameters.
Update the heatEnvConfigMap to add the tls-certs.yaml file:

$ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -

Create an OpenStackConfigGenerator resource and add the required heatEnvs configuration files to configure TLS for public endpoint IPs.
- Generate the Ansible playbooks by using OpenStackConfigGenerator and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
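The exact contents of the CA ConfigMap depend on your certificate files. As a minimal illustrative sketch, one way to store a CA certificate in a ConfigMap is shown below; the ConfigMap name cacerts and the file path are assumptions, not values from this guide:

$ oc create configmap cacerts -n openstack --from-file=ca.crt=/path/to/ca.crt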
12.2. TLS for public endpoint DNS names
To reference public endpoint DNS names, add your CA certificates to the openstackclient pod by creating a ConfigMap resource to store the CA certificates, then referencing that ConfigMap resource in the OpenStackControlPlane resource.
Procedure
Create a ConfigMap resource to store the CA certificates.
Create the OpenStackControlPlane resource and reference the ConfigMap resource.
- Replace <overcloud> with the name of your overcloud control plane.
- Create a file in the ~/custom_environment_files directory named tls-certs.yaml that specifies the generated certificates for the deployment by using the SSLCertificate, SSLIntermediateCertificate, SSLKey, and CAMap parameters.
Update the heatEnvConfigMap to add the tls-certs.yaml file:

$ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -

Create an OpenStackConfigGenerator resource and add the required heatEnvs configuration files to configure TLS for public endpoint DNS names.
- Generate the Ansible playbooks by using OpenStackConfigGenerator and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
Chapter 13. Changing service account passwords using director Operator
Red Hat OpenStack Platform (RHOSP) services and the databases that they use are authenticated by their Identity service (keystone) credentials. The Identity service generates these RHOSP passwords during the initial RHOSP deployment process. You might be required to periodically update passwords for threat mitigation or security compliance. You can use tools native to director Operator (OSPdO) to change many of the generated passwords after your RHOSP environment is deployed.
13.1. Rotating overcloud service account passwords with director Operator
You can rotate the overcloud service account passwords used with a director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment.
Procedure
Create a backup of the current tripleo-passwords secret:

$ oc get secret tripleo-passwords -n openstack -o yaml > tripleo-passwords_backup.yaml

Create a plain text file named tripleo-overcloud-passwords_preserve_list to specify that the passwords for the following services should not be rotated. You can add additional services to this list if there are other services for which you want to preserve the password.
Create a password parameter file, tripleo-overcloud-passwords.yaml, that lists the passwords that should not be modified:

$ oc get secret tripleo-passwords -n openstack \
  -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' \
  | base64 -d | grep -f ./tripleo-overcloud-passwords_preserve_list > tripleo-overcloud-passwords.yaml

- Validate that the tripleo-overcloud-passwords.yaml file contains the passwords that you do not want to rotate.
Update the tripleo-passwords secret:

$ oc create secret generic tripleo-passwords -n openstack \
  --from-file=./tripleo-overcloud-passwords.yaml \
  --dry-run=client -o yaml | oc apply -f -

- Create Ansible playbooks to configure the overcloud with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Apply the updated configuration. For more information, see Applying overcloud configuration with director Operator.
Verification
Compare the new NovaPassword in the secret to what is now installed on the Controller node.
Get the password from the updated secret:

$ oc get secret tripleo-passwords -n openstack -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d | grep NovaPassword

Example output:

NovaPassword: hp4xpt7t2p79ktqjjnxpqwbp6

Retrieve the password for the Compute service (nova) running on the Controller nodes:
Access the openstackclient remote shell:

$ oc rsh openstackclient -n openstack

Ensure that you are in the home directory:

$ cd

Retrieve the Compute service password:

$ ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -b -a "grep ^connection /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf"

Example output:

172.22.0.120 | CHANGED | rc=0 >>
connection=mysql+pymysql://nova_api:hp4xpt7t2p79ktqjjnxpqwbp6@172.17.0.10/nova_api?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo
connection=mysql+pymysql://nova:hp4xpt7t2p79ktqjjnxpqwbp6@172.17.0.10/nova?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo
Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator
Deploy nodes with spine-leaf networking architecture to replicate an extensive network topology within your environment. Current restrictions allow only one provisioning network for Metal3.
14.1. Creating or updating the OpenStackNetConfig custom resource to define all subnets
Define your OpenStackNetConfig custom resource (CR) and specify the subnets for the overcloud networks. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) then renders the configuration and creates, or updates, the network topology.
Prerequisites
- You have installed OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have installed the oc command line tool on your workstation.
Procedure
Create a configuration file named openstacknetconfig.yaml.
Create the internal API network:

$ oc create -f openstacknetconfig.yaml -n openstack

Verify that the resources and child resources for the OpenStackNetConfig resource are created:

$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
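As a rough orientation, an openstacknetconfig.yaml file for a spine-leaf layout declares each network and one subnet per leaf. The following is an illustrative sketch only: the network names, VLAN IDs, and address ranges are assumptions, the node attachment configuration is omitted, and all field names should be verified against the OpenStackNetConfig CRD in your installed OSPdO version.

# openstacknetconfig.yaml (illustrative sketch, not a complete definition)
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  networks:
  - name: InternalApi
    nameLower: internal_api
    subnets:
    - name: internal_api
      vlan: 20
      ipv4:
        allocationStart: 172.17.0.10
        allocationEnd: 172.17.0.250
        cidr: 172.17.0.0/24
        gateway: 172.17.0.1
    - name: internal_api_leaf1
      vlan: 21
      ipv4:
        allocationStart: 172.17.1.10
        allocationEnd: 172.17.1.250
        cidr: 172.17.1.0/24
        gateway: 172.17.1.1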
14.2. Add roles for leaf networks to your deployment
To add roles for the leaf networks to your deployment, update the roles_data.yaml configuration file. If the leaf network roles have different NIC configurations, you can create Ansible NIC templates for each role to configure the spine-leaf networking, register the NIC templates, and create the ConfigMap custom resource.
You must use roles_data.yaml as the filename.
Procedure
Update the roles_data.yaml file.
- Create a NIC template for each Compute role. For example Ansible NIC templates, see https://github.com/openstack/tripleo-ansible/tree/stable/wallaby/tripleo_ansible/roles/tripleo_network_config/templates.
Add the NIC templates for the new nodes to an environment file:

parameter_defaults:
  ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf1NetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf2NetworkConfigTemplate: 'multiple_nics_compute_leaf_2_vlans_dvr.j2'

In the ~/custom_environment_files directory, archive the roles_data.yaml file, the environment file, and the NIC templates into a tarball:

$ tar -cvzf custom-spine-leaf-config.tar.gz *.yaml

Create the tripleo-tarball-config ConfigMap resource:

$ oc create configmap tripleo-tarball-config --from-file=custom-spine-leaf-config.tar.gz -n openstack
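If you want to confirm that the tarball was stored, a quick illustrative check is:

$ oc get configmap tripleo-tarball-config -n openstack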
14.3. Deploying the overcloud with multiple routed networks
To deploy the overcloud with multiple sets of routed networking, create the control plane and the Compute nodes for the spine-leaf network, and then render and apply the Ansible playbooks. To create the control plane, specify the resources for the Controller nodes. To create the Compute nodes for the leafs from bare-metal machines, include the resource specification in the OpenStackBaremetalSet custom resource.
Procedure
Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes, for example a specification for a control plane that consists of three Controller nodes.
Create the control plane:

$ oc create -f openstack-controller.yaml -n openstack

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane resource.
Create a file on your workstation for each Compute leaf, for example, openstack-computeleaf1.yaml. Include the resource specification for the Compute nodes for the leaf, for example a specification for one Compute leaf that includes one Compute node.
Create the Compute nodes for each leaf:

$ oc create -f openstack-computeleaf1.yaml -n openstack

- Generate the Ansible playbooks by using OpenStackConfigGenerator and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
Verification
View the resource for the control plane:

$ oc get openstackcontrolplane/overcloud -n openstack

View the OpenStackVMSet resources to verify the creation of the control plane virtual machine (VM) set:

$ oc get openstackvmsets -n openstack

View the VM resources to verify the creation of the control plane VMs in OpenShift Virtualization:

$ oc get virtualmachines -n openstack

Test access to the openstackclient pod remote shell:

$ oc rsh -n openstack openstackclient

View the resource for each Compute leaf:

$ oc get openstackbaremetalset/computeleaf1 -n openstack

View the bare-metal machines managed by RHOCP to verify the creation of the Compute nodes:

$ oc get baremetalhosts -n openshift-machine-api
Chapter 15. Backing up and restoring a director Operator deployed overcloud
To back up a Red Hat OpenStack Platform (RHOSP) overcloud that was deployed with director Operator (OSPdO), you must back up the Red Hat OpenShift Container Platform (RHOCP) OSPdO resources, and then use the Relax-and-Recover (ReaR) tool to back up the control plane and overcloud.
15.1. Backing up and restoring director Operator resources
Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) provides custom resource definitions (CRDs) for backing up and restoring a deployment. You do not have to manually export and import multiple configurations. OSPdO knows which custom resources (CRs), including the ConfigMap and Secret CRs, it needs to create a complete backup because it is aware of the state of all resources. Therefore, OSPdO does not back up any configuration that is in an incomplete or error state.
To back up and restore an OSPdO deployment, you create an OpenStackBackupRequest CR to initiate the creation or restoration of a backup. Your OpenStackBackupRequest CR creates the OpenStackBackup CR, which stores the backup of the custom resources (CRs), ConfigMap, and Secret configurations for the specified namespace.
15.1.1. Backing up director Operator resources
To create a backup, you must create an OpenStackBackupRequest custom resource (CR) for the namespace. The OpenStackBackup CR is created when the OpenStackBackupRequest object is created in save mode.
Procedure
- Create a file named openstack_backup.yaml on your workstation.
Add the configuration to your openstack_backup.yaml file to create the OpenStackBackupRequest custom resource (CR); an illustrative sketch follows.
Note: OSPdO attempts to include all ConfigMap and Secret objects associated with the OSPdO CRs in the namespace, such as OpenStackControlPlane and OpenStackBaremetalSet. You do not need to include those in the additional lists.
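The following is an illustrative sketch of a save-mode OpenStackBackupRequest; the optional additional lists are shown empty, and the field names should be verified against the OpenStackBackupRequest CRD in your installed OSPdO version:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBackupRequest
metadata:
  name: openstackbackupsave
  namespace: openstack
spec:
  mode: save
  additionalConfigMaps: []
  additionalSecrets: []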
- Save the openstack_backup.yaml file.
Create the OpenStackBackupRequest CR:

$ oc create -f openstack_backup.yaml -n openstack

Monitor the creation status of the OpenStackBackupRequest CR:

$ oc get openstackbackuprequest openstackbackupsave -n openstack

The Quiescing state indicates that OSPdO is waiting for the CRs to reach their finished state. The number of CRs can affect how long it takes to finish creating the backup.

NAME                  OPERATION   SOURCE   STATUS      COMPLETION TIMESTAMP
openstackbackupsave   save                 Quiescing

If the status remains in the Quiescing state for longer than expected, you can investigate the OSPdO logs to check progress:

$ oc logs <operator_pod> -c manager -f
2022-01-11T18:26:15.180Z INFO controllers.OpenStackBackupRequest Quiesce for save for OpenStackBackupRequest openstackbackupsave is waiting for: [OpenStackBaremetalSet: compute, OpenStackControlPlane: overcloud, OpenStackVMSet: controller]

- Replace <operator_pod> with the name of the Operator pod.
The Saved state indicates that the OpenStackBackup CR is created.

NAME                  OPERATION   SOURCE   STATUS   COMPLETION TIMESTAMP
openstackbackupsave   save                 Saved    2022-01-11T19:12:58Z

The Error state indicates that the backup failed to create. Review the request contents to find the error:

$ oc get openstackbackuprequest openstackbackupsave -o yaml -n openstack

View the OpenStackBackup resource to confirm that it exists:

$ oc get openstackbackup -n openstack
NAME                             AGE
openstackbackupsave-1641928378   6m7s
15.1.2. Restoring director Operator resources from a backup
When you request to restore a backup, Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) takes the contents of the specified OpenStackBackup resource and attempts to apply them to all existing custom resources (CRs), ConfigMap and Secret resources present within the namespace. OSPdO overwrites any existing resources in the namespace, and creates new resources for those not found within the namespace.
Procedure
List the available backups:

$ oc get osbackup

Inspect the details of a specific backup:

$ oc get backup <name> -o yaml

- Replace <name> with the name of the backup that you want to inspect.
- Create a file named openstack_restore.yaml on your workstation.
Add the configuration to your openstack_restore.yaml file to create the OpenStackBackupRequest custom resource (CR); an illustrative sketch follows the placeholder descriptions below.
Replace <mode> with one of the following options:
- restore: Requests a restore from an existing OpenStackBackup.
- cleanRestore: Completely wipes the existing OSPdO resources within the namespace before restoring and creating new resources from the existing OpenStackBackup.
- Replace <restore_source> with the ID of the OpenStackBackup to restore, for example, openstackbackupsave-1641928378.
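The following is an illustrative sketch built from the <mode> and <restore_source> placeholders described above; the field names (for example, restoreSource) should be verified against the OpenStackBackupRequest CRD in your installed OSPdO version:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBackupRequest
metadata:
  name: openstackbackuprestore
  namespace: openstack
spec:
  mode: <mode>
  restoreSource: <restore_source>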
- Save the openstack_restore.yaml file.
Create the OpenStackBackupRequest CR:

$ oc create -f openstack_restore.yaml -n openstack

Monitor the creation status of the OpenStackBackupRequest CR:

$ oc get openstackbackuprequest openstackbackuprestore -n openstack

The Loading state indicates that all resources from the OpenStackBackup are being applied against the cluster.

NAME                     OPERATION   SOURCE                           STATUS    COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Loading

The Reconciling state indicates that all resources are loaded and OSPdO has begun reconciling to attempt to provision all resources.

NAME                     OPERATION   SOURCE                           STATUS        COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Reconciling

The Restored state indicates that the OpenStackBackup CR has been restored.

NAME                     OPERATION   SOURCE                           STATUS     COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Restored   2022-01-12T13:48:57Z

The Error state indicates that the restoration has failed. Review the request contents to find the error:

$ oc get openstackbackuprequest openstackbackuprestore -o yaml -n openstack
15.2. Backing up and restoring a director Operator deployed overcloud with the Relax-and-Recover tool
To back up a director Operator deployed overcloud with the Relax-and-Recover (ReaR) tool, you configure the backup node, install the ReaR tool on the control plane, and create the backup image. You can create backups as a part of your regular environment maintenance.
In addition, you must back up the control plane before performing updates or upgrades. You can use the backups to restore the control plane to its previous state if an error occurs during an update or upgrade.
15.2.1. Supported backup formats and protocols
The backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols.
The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore a director Operator deployed control plane.
- Bootable media formats
- ISO
- File transport protocols
- SFTP
- NFS
15.2.2. Configuring the backup storage location
You can install and configure an NFS server to store the backup file. Before you create a backup of the control plane, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution.
- If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up.
- By default, the Relax-and-Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1. You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment.
Procedure
Create an NFS backup directory on your workstation:
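The exact commands depend on your NFS server configuration. The following is a minimal illustrative sketch that assumes an NFS server is already running and that the export path /home/nfs/backup is used; both are assumptions, not values mandated by this guide:

$ mkdir -p /home/nfs/backup
$ echo "/home/nfs/backup *(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
$ sudo exportfs -ra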
Create the bar-vars.yaml file on your workstation:

$ touch /home/stack/bar-vars.yaml

In the bar-vars.yaml file, configure the backup storage location:

tripleo_backup_and_restore_server: <ip_address>
tripleo_backup_and_restore_shared_storage_folder: <backup_dir>

- Replace <ip_address> with the IP address of your NFS server, for example, 172.22.0.1. The default IP address is 192.168.24.1.
- Replace <backup_dir> with the location of the backup storage folder, for example, /home/nfs/backup.
15.2.3. Performing a backup of the control plane
To create a backup of the control plane, you must install and configure Relax-and-Recover (ReaR) on each of the Controller virtual machines (VMs).
Due to a known issue, the ReaR backup of overcloud nodes continues even if a Controller node is down. Ensure that all your Controller nodes are running before you run the ReaR backup. A fix is planned for a later Red Hat OpenStack Platform (RHOSP) release. For more information, see BZ#2077335 - Back up of the overcloud ctlplane keeps going even if one controller is unreachable.
Procedure
Extract the static Ansible inventory file from the location in which it was saved during installation:

$ oc rsh openstackclient
$ cd
$ find . -name tripleo-ansible-inventory.yaml
$ cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml .

- Replace <stack> with the name of your stack, for example, cloud-admin. By default, the name of the stack is overcloud.
Install ReaR on each Controller virtual machine (VM):

$ openstack overcloud backup --setup-rear --extra-vars /home/cloud-admin/bar-vars.yaml --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml

Open the /etc/rear/local.conf file on each Controller VM:

$ ssh controller-0
[cloud-admin@controller-0 ~]$ sudo -i
[root@controller-0 ~]# cat >>/etc/rear/local.conf<<EOF

In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter to configure the Controller VM networks in the following format:

NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...'<command_n>')

- Replace <command_1>, <command_2>, and all commands up to <command_n>, with commands that configure the network interface names or IP addresses. For example, you can add the ip link add br-ctlplane type bridge command to configure the control plane bridge name or add the ip link set eth0 up command to set the name of the interface. You can add more commands to the parameter based on your network configuration.
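For example, using the two commands mentioned above, the parameter might look like the following; the bridge and interface names are assumptions that must match your environment:

NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip link set eth0 up')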
Repeat the following command on each Controller VM to back up their config-drive partitions:

[root@controller-0 ~]# dd if=/dev/vda1 of=/mnt/config-drive

Create a backup of the Controller VMs:

$ oc rsh openstackclient
$ openstack overcloud backup --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml

The backup process runs sequentially on each Controller VM without disrupting the service to your environment.
Note: You cannot use cron to schedule backups because cron cannot be used on the openstackclient pod.
15.2.4. Restoring the control plane
If an error occurs during an update or upgrade, you can restore the control plane to its previous state by using the backup ISO image that you created using the Relax-and-Recover (ReaR) tool.
To restore the control plane, you must restore all Controller virtual machines (VMs) to ensure state consistency.
You can find the backup ISO images on the backup node.
Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation.
Prerequisites
- You have created a backup of the control plane nodes.
- You have access to the backup node.
- A vncviewer package is installed on the workstation.
Procedure
Power off each Controller VM. Ensure that all the Controller VMs are powered off completely:

$ oc get vm

Upload the backup ISO images for each Controller VM into a cluster PVC:

$ virtctl image-upload pvc <backup_image> \
  --pvc-size=<pvc_size> \
  --image-path=<image_path> \
  --insecure

- Replace <backup_image> with the name of the PVC backup image for the Controller VM. For example, backup-controller-0-202310231141.
- Replace <pvc_size> with the size of the PVC required for the image specified with the --image-path option. For example, 4G.
- Replace <image_path> with the path to the backup ISO image for the Controller VM. For example, /home/nfs/backup/controller-0/controller-0-202310231141.iso.
Disable the director Operator by changing its replicas to 0:

$ oc patch csv -n openstack <csv> --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]"

- Replace <csv> with the CSV from the environment, for example, osp-director-operator.v1.3.1.
Verify that the osp-director-operator-controller-manager pod is stopped:

$ oc get pod osp-director-operator-controller-manager

Create a backup of each Controller VM resource:

$ oc get vm controller-0 -o yaml > controller-0-bk.yaml

Update the Controller VM resource with bootOrder set to 1 and attach the uploaded PVC as a CD-ROM (an illustrative snippet follows).
- Replace <backup_image> with the name of the PVC backup image uploaded for the Controller VM in step 2. For example, backup-controller-0-202310231141.
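The following is an illustrative sketch of the disk and volume entries that attach the uploaded PVC as a bootable CD-ROM in the Controller VM definition. The entry name recovery-cdrom is an assumption, the snippet is not a complete VirtualMachine spec, and the fields follow the KubeVirt VirtualMachine API as generally documented:

# Fragment of the VirtualMachine spec (illustrative only)
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: recovery-cdrom
            bootOrder: 1
            cdrom:
              bus: sata
      volumes:
      - name: recovery-cdrom
        persistentVolumeClaim:
          claimName: <backup_image>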
Start each Controller VM:

$ virtctl start controller-0

- Wait until the status of each Controller VM is RUNNING.
Connect to each Controller VM by using VNC:

$ virtctl vnc controller-0

Note: If you are using SSH to access the Red Hat OpenShift Container Platform (RHOCP) CLI on a remote system, ensure that SSH X11 forwarding is correctly configured. For more information, see the Red Hat Knowledgebase solution How do I configure X11 forwarding over SSH in Red Hat Enterprise Linux?.
- ReaR starts automatic recovery after a timeout by default. If recovery does not start automatically, you can manually select the Recover option from the Relax-and-Recover boot menu and specify the name of the control plane node to recover.
Wait until the recovery is finished. When the control plane node restoration process completes, the console displays the following message:

Finished recovering your system
Exiting rear recover
Running exit tasks

- Enter the recovery shell as root.
When the command line console is available, restore the config-drive partition of each control plane node:

# once completed, restore the config-drive partition (which is ISO9660)
RESCUE <control_plane_node>:~ $ dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>

Power off each node:

RESCUE <control_plane_node>:~ # poweroff

- Update the Controller VM resource and detach the CD-ROM. Make sure the rootDisk has bootOrder: 1.
Enable the director Operator by changing its replicas to 1:

$ oc patch csv -n openstack <csv> --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "1"}]"

- Verify that the osp-director-operator-controller-manager pod is started.
Start each Controller VM:

$ virtctl start controller-0
$ virtctl start controller-1
$ virtctl start controller-2

- Wait until the Controller VMs are running. SELinux is relabelled on first boot.
Check the cluster status:

$ pcs status

If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually. For more information, see Restoring the Galera cluster manually.
Chapter 16. Change resources on virtual machines using director Operator
To change the CPU, RAM, and disk resources of an OpenStackVMSet custom resource (CR), use the OpenStackControlPlane CRD.
16.1. Change the CPU or RAM of an OpenStackVMSet CR
You can use the OpenStackControlPlane CRD to change the CPU or RAM of an OpenStackVMSet custom resource (CR).
Procedure
Change the number of Controller virtualMachineRole cores to 8:

$ oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/cores", "value": 8 }]'

Change the Controller virtualMachineRole RAM size to 22GB:

$ oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/memory", "value": 22 }]'

Validate the virtualMachineRole resource:

$ oc get osvmset
NAME         CORES   RAM   DESIRED   READY   STATUS        REASON
controller   8       22    1         1       Provisioned   All requested VirtualMachines have been provisioned

- From inside the virtual machine, do a graceful shutdown. Shut down each updated virtual machine one by one.
Power on the virtual machine:

$ virtctl start <VM>

- Replace <VM> with the name of your virtual machine.
16.2. Add additional disks to an OpenStackVMSet CR
You can use the OpenStackControlPlane CRD to add additional disks to a virtual machine by editing the additionalDisks property.
Procedure
Add or update the additionalDisks parameter in the OpenStackControlPlane object (an illustrative sketch of a patch file follows this procedure).
Apply the patch:

$ oc patch -n openstack osctlplane overcloud --patch-file controller_add_data_disk1.yaml

Validate the virtualMachineRole resource.
- From inside the virtual machine, do a graceful shutdown. Shut down each updated virtual machine one by one.
Power on the virtual machine:

$ virtctl start <VM>

- Replace <VM> with the name of your virtual machine.
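The following is an illustrative sketch of what a patch file such as controller_add_data_disk1.yaml might contain: it adds one data disk to the Controller virtualMachineRole. The disk name, size, and storage settings are assumptions, and the field names, which mirror the rootDisk storage settings, should be verified against the OpenStackControlPlane CRD in your installed OSPdO version.

# controller_add_data_disk1.yaml (illustrative sketch)
spec:
  virtualMachineRoles:
    controller:
      additionalDisks:
      - name: datadisk1
        diskSize: 10
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem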
Chapter 17. Airgapped environment
An air-gapped environment ensures security by physically isolating the environment from other networks and systems. You can install director Operator in an air-gapped environment to ensure security and to meet certain regulatory requirements.
17.1. Prerequisites
An operational Red Hat Openshift Container Platform (RHOCP) cluster, version 4.12, 4.14, or 4.16. The cluster must contain a provisioning network, and the following Operators:
- A baremetal cluster Operator. The baremetal cluster Operator must be enabled. For more information on baremetal cluster Operators, see Bare-metal cluster Operators.
- OpenShift Virtualization Operator. For more information on installing the OpenShift Virtualization Operator, see Installing OpenShift Virtualization using the web console.
- SR-IOV Network Operator.
- You have a disconnected registry adhering to docker v2 schema. For more information, see Mirroring images for a disconnected installation.
- You have access to a Satellite server or any other repository used to register the overcloud nodes and install packages.
- You have access to a local git repository to store deployment artifacts.
The following command line tools are installed on your workstation:
- podman
- skopeo
- oc
- jq
17.2. Configuring an airgapped environment
To configure an air-gapped environment, you must have access to both registry.redhat.io and the registry for the air-gapped environment. For more information on how to access both registries, see Mirroring catalog contents to airgapped registries.
Procedure
Create the openstack namespace:

$ oc new-project openstack

Create the index image and push it to your registry:

$ podman login registry.redhat.io
$ podman login your.registry.local
$ BUNDLE_IMG="registry.redhat.io/rhosp-rhel9/osp-director-operator-bundle@sha256:<bundle digest>"
$ INDEX_IMG="quay.io/<account>/osp-director-operator-index:x.y.z-a"
$ opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman

Note: You can get the latest bundle image from: Certified container images. Search for osp-director-operator-bundle.
Retrieve the digest of the index image that you created in the previous step:

$ INDEX_DIGEST="$(skopeo inspect docker://quay.io/<account>/osp-director-operator-index:x.y.z-a | jq '.Digest' -r)"

Mirror the relevant images based on the operator index image:

$ oc adm catalog mirror quay.io/<account>/osp-director-operator-index@${INDEX_DIGEST} your.registry.local --insecure --index-filter-by-os='Linux/x86_64'

After mirroring is complete, a manifests directory called manifests-osp-director-operator-index-<random_number> is generated in your current directory. Apply the created ImageContentSourcePolicy to your cluster:

$ oc apply -f manifests-osp-director-operator-index-<random_number>/imageContentSourcePolicy.yaml

- Replace <random_number> with the randomly generated number.
Create a file named osp-director-operator.yaml and include the following YAML content to configure the three resources required to install director Operator.
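The following is an illustrative sketch of the three OLM resources typically used to install director Operator from a mirrored index image: a CatalogSource, an OperatorGroup, and a Subscription. The resource names, channel, and index image reference are assumptions; align them with your mirrored registry and the osp-director-operator documentation for your release.

# osp-director-operator.yaml (illustrative sketch)
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: your.registry.local/<account>/osp-director-operator-index:x.y.z-a
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: osp-director-operator-group
  namespace: openstack
spec:
  targetNamespaces:
  - openstack
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  name: osp-director-operator
  channel: alpha
  source: osp-director-operator-index
  sourceNamespace: openstack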
Create the new resources in the openstack namespace:

$ oc apply -f osp-director-operator.yaml

Copy the required overcloud images to the repository:

$ for i in $(podman search --limit 1000 "registry.redhat.io/rhosp-rhel9" --format="{{ .Name }}" | awk '{print $1 ":" "<rhosp_version>"}' | awk -F "/" '{print $2 "/" $3}'); do skopeo copy --all docker://registry.redhat.io/$i docker://your.registry.local/$i;done

- Replace <rhosp_version> with the version of RHOSP that you are using, for example, 17.1.5.
Note: You can refer to Preparing a Satellite server for container images if Red Hat Satellite is used as the local registry.
- You can now proceed with Installing and preparing director Operator.
Verification
Confirm that you have successfully installed director Operator:
$ oc get operators
NAME                              AGE
osp-director-operator.openstack   5m
Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1)
You can upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 overcloud to a RHOSP 17.1 overcloud with director Operator (OSPdO) by using the in-place framework for upgrades (FFU) workflow.
To perform an upgrade, you must perform the following tasks:
- Prepare your environment for the upgrade.
-
Update custom
roles_datafiles to the composable services supported by RHOSP 17.1. -
Optional: Upgrade Red Hat Ceph Storage and adopt
cephadm. - Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8.
- Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9.
- Perform post-upgrade tasks.
18.1. Prerequisites
- You are using the latest version of OSPdO.
- The overcloud deployment is running RHOSP version 16.2.4 or later. If your overcloud deployment is running a RHOSP version that is earlier than 16.2.4, you must update the environment to the latest minor version of your current release. For information about how to perform a minor update, see Performing a minor update of the RHOSP overcloud with director Operator.
- The minimum kernel version running on the overcloud nodes is kernel-4.18.0-305.41.1.el8.
18.2. Updating director Operator
You must update your director Operator (OSPdO) to the latest 17.1 version before performing the overcloud upgrade. To update OSPdO, you must first delete and reinstall the current OSPdO. To delete OSPdO, you delete the OSPdO subscription and CSV.
Procedure
Check the current version of the director Operator in the currentCSV field:

$ oc get subscription osp-director-operator-subscription -n openstack -o yaml | grep currentCSV

Delete the CSV for the director Operator in the target namespace:

$ oc delete clusterserviceversion <current_CSV> -n openstack

- Replace <current_CSV> with the currentCSV value from step 1.
Delete the subscription:

$ oc delete subscription osp-director-operator.openstack -n openstack

- Install the latest 17.1 director Operator. For information, see Installing director Operator.
18.3. Preparing your director Operator environment for upgrade
You must prepare your director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment for the upgrade to RHOSP 17.1.
Procedure
Set openStackRelease to 17.1 on the openstackcontrolplane CR:

$ oc patch openstackcontrolplane -n openstack overcloud --type=json -p="[{'op': 'replace', 'path': '/spec/openStackRelease', 'value': '17.1'}]"

Retrieve the OSPdO ClusterServiceVersion (csv) CR:

$ oc get csv -n openstack

Delete all instances of the OpenStackConfigGenerator CR:

$ oc delete -n openstack openstackconfiggenerator --all

If your deployment includes HCI, the adoption from ceph-ansible to cephadm must be performed using the RHOSP 17.1 on RHEL 8 openstackclient image:

$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'replace', 'path': '/spec/imageURL', 'value': 'registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1'}]"

If your deployment does not include HCI, or the cephadm adoption has already been completed, switch to the 17.1 OSPdO default openstackclient image by removing the current imageURL from the openstackclient CR:

$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"

If you have enabled fencing in the overcloud, you must temporarily disable fencing on one of the Controller nodes for the duration of the upgrade:

$ oc rsh -n openstack openstackclient
$ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=false"
18.4. Updating composable services in custom roles_data files
You must update your roles_data files to the supported Red Hat OpenStack Platform (RHOSP) 17.1 composable services. For more information, see Updating composable services in custom roles_data files in the Framework for Upgrades (16.2 to 17.1) guide.
Procedure
Remove the following services from all roles.
- Add the OS::TripleO::Services::GlanceApiInternal service to your Controller role.
- Update the OS::TripleO::Services::NovaLibvirt service on the Compute roles to OS::TripleO::Services::NovaLibvirtLegacy.
- If your environment includes Red Hat Ceph Storage, set the DeployedCeph parameter to false to enable director-managed cephadm deployments.
If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the overcloud. The following functions are not supported with automatic conversion.
For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Installing and managing Red Hat OpenStack Platform with director.
18.5. Upgrading Red Hat Ceph Storage and adopting cephadm
If your environment includes Red Hat Ceph Storage deployments, you must upgrade your deployment to Red Hat Ceph Storage 5. With an upgrade to version 5, cephadm now manages Red Hat Ceph Storage instead of ceph-ansible.
Procedure
-
Create an Ansible playbook file named
ceph-admin-user-playbook.yamlto create aceph-adminuser on the overcloud nodes. Add the following configuration to the
ceph-admin-user-playbook.yamlfile:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Copy the playbook to the
- Copy the playbook to the openstackclient container:

$ oc cp -n openstack ceph-admin-user-playbook.yml openstackclient:/home/cloud-admin/ceph-admin-user-playbook.yml

- Run the playbook on the openstackclient container:

$ oc rsh -n openstack openstackclient
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory -e tripleo_admin_user=ceph-admin -e distribute_private_key=true /home/cloud-admin/ceph-admin-user-playbook.yml

- Update the Red Hat Ceph Storage container image parameters in the
containers-prepare-parameter.yaml file for the version of Red Hat Ceph Storage that your deployment uses:
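The parameter listing is not reproduced here. As a hedged sketch, the Ceph-related entries inside ContainerImagePrepare usually look like the following; the registry namespaces and tags are assumptions that you must replace with the values for your deployment, and the two placeholders are explained immediately below:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: false
      set:
        # Assumed registry locations and tags: adjust for your deployment.
        ceph_namespace: registry.redhat.io/rhceph
        ceph_image: <ceph_image_file>
        ceph_tag: latest
        ceph_grafana_namespace: registry.redhat.io/rhceph
        ceph_grafana_image: <grafana_image_file>
        ceph_grafana_tag: latest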
- Replace <ceph_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
  - Red Hat Ceph Storage 5: rhceph-5-rhel8
- Replace <grafana_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
  - Red Hat Ceph Storage 5: rhceph-5-dashboard-rhel8
- If your deployment includes HCI, update the CephAnsibleRepo parameter in compute-hci.yaml to "rhelosp-ceph-5-tools".
- Create an environment file named upgrade.yaml and add the following configuration to it:
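The contents of this environment file are not reproduced here. As a loose illustration only, assuming that the ceph-ansible rolling_update.yml playbook drives the version 4 to 5 upgrade, the file might resemble the following; verify the exact parameters against the Framework for Upgrades (16.2 to 17.1) guide before you use them:

parameter_defaults:
  # Assumptions: point ceph-ansible at the Red Hat Ceph Storage 5 tools repository
  # and at the rolling upgrade playbook.
  CephAnsibleRepo: rhelosp-ceph-5-tools
  CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml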
- Create a new OpenStackConfigGenerator CR named ceph-upgrade that includes the updated environment file and tripleo-tarball ConfigMaps.
- Create a file named openstack-ceph-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the upgrade from Red Hat Ceph Storage 4 to 5:
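The CR listing is not reproduced here. A minimal sketch of an OpenStackDeploy CR for this step follows; the configVersion placeholder, the mode, and the Ansible tags in advancedSettings are assumptions that you must align with the configuration generated by the ceph-upgrade OpenStackConfigGenerator CR and with the upgrade guide:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-upgrade
  namespace: openstack
spec:
  # Hash of the configuration generated by the ceph-upgrade OpenStackConfigGenerator CR.
  configVersion: <config_version>
  configGenerator: ceph-upgrade
  # Assumption: the Red Hat Ceph Storage upgrade runs as an external update scoped to the ceph tasks.
  mode: externalUpdate
  advancedSettings:
    tags:
      - ceph

The openstack-ceph-upgrade-packages.yaml and openstack-ceph-upgrade-to-cephadm.yaml CRs that you create in the later steps follow the same pattern, with different names and different tag values.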
- Save the openstack-ceph-upgrade.yaml file.
- Create the OpenStackDeploy resource:

$ oc create -f openstack-ceph-upgrade.yaml -n openstack

- Wait for the deployment to finish.
- Create a file named openstack-ceph-upgrade-packages.yaml on your workstation to define an OpenStackDeploy CR that upgrades the Red Hat Ceph Storage packages.
- Save the openstack-ceph-upgrade-packages.yaml file.
- Create the OpenStackDeploy resource:

$ oc create -f openstack-ceph-upgrade-packages.yaml -n openstack

- Wait for the deployment to finish.
- Create a file named openstack-ceph-upgrade-to-cephadm.yaml on your workstation to define an OpenStackDeploy CR that runs the cephadm adoption.
- Save the openstack-ceph-upgrade-to-cephadm.yaml file.
- Create the OpenStackDeploy resource:

$ oc create -f openstack-ceph-upgrade-to-cephadm.yaml -n openstack

- Wait for the deployment to finish.
- Update the openstackclient image to the RHEL 9 container image by removing the current imageURL from the openstackclient CR:

$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
18.6. Upgrading the overcloud to RHOSP 17.1 on RHEL 8
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud.
You must update your container preparation file for both RHEL 8 and RHEL 9 hosts:
- RHEL 9 hosts: All containers are based on RHEL 9.
- RHEL 8 hosts: All containers are based on RHEL 9 except for libvirt and collectd. The libvirt and collectd containers must use the same base as the host.
You must then generate a new OpenStackConfigGenerator CR before deploying the updates.
Procedure
- Open the container preparation file, containers-prepare-parameter.yaml, and check that it obtains the correct image versions.
- Add the ContainerImagePrepareRhel8 parameter to containers-prepare-parameter.yaml:
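The parameter listing is not reproduced here. The container_image_prepare_rhel8 anchor and the role-specific overrides are confirmed later in this chapter, where you remove them again before the RHEL 9 upgrade; the namespaces, tags, and the includes and excludes entries below are assumptions that illustrate the split between RHEL 9 based containers and the libvirt and collectd containers that must match the RHEL 8 host:

parameter_defaults:
  ContainerImagePrepareRhel8: &container_image_prepare_rhel8
    # Assumed values: on RHEL 8 hosts, every container is RHEL 9 based except
    # nova-libvirt and collectd, which must use the same base as the host.
    - push_destination: false
      set:
        namespace: registry.redhat.io/rhosp-rhel9
        name_prefix: openstack-
        tag: '17.1'
      excludes:
        - nova-libvirt
        - collectd
    - push_destination: false
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        name_prefix: openstack-
        tag: '17.1'
      includes:
        - nova-libvirt
        - collectd
  # Role-specific overrides for roles whose hosts are still on RHEL 8.
  ControllerContainerImagePrepare: *container_image_prepare_rhel8
  ComputeContainerImagePrepare: *container_image_prepare_rhel8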
- Create an environment file named upgrade.yaml and add the upgrade configuration for your environment to it.
- Create an environment file named disable_compute_service_check.yaml.
- Add the following configuration to the disable_compute_service_check.yaml file:
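The file contents are not reproduced here. Based on the equivalent step in the Framework for Upgrades (16.2 to 17.1) workflow, the file is expected to temporarily disable the Compute service version check while mixed 16.2 and 17.1 services run; the exact parameter below is an assumption to verify against that guide:

parameter_defaults:
  ExtraConfig:
    # Assumption: disable the Compute service version check for the duration of the upgrade.
    nova::workarounds::disable_compute_service_check_for_ffu: true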
- If your deployment includes HCI, update the Red Hat Ceph Storage and HCI parameters from ceph-ansible values in RHOSP 16.2 to cephadm values in RHOSP 17.1. For more information, see Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator.
- Create a file named openstack-configgen-upgrade.yaml on your workstation that defines a new OpenStackConfigGenerator CR named "upgrade":
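The CR listing is not reproduced here. A minimal sketch of an OpenStackConfigGenerator CR follows; the gitSecret and ConfigMap names are assumptions that must match the Git secret and the ConfigMaps that contain your updated heat environment files and tarball content:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: upgrade
  namespace: openstack
spec:
  # Assumed names: reuse the secret and ConfigMaps that hold your custom
  # environment files (upgrade.yaml, disable_compute_service_check.yaml, and so on).
  gitSecret: git-secret
  heatEnvConfigMap: heat-env-config-upgrade
  tarballConfigMap: tripleo-tarball-config-upgrade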
- Create a file named openstack-upgrade.yaml on your workstation to create an OpenStackDeploy CR for the overcloud upgrade.
- Save the openstack-upgrade.yaml file.
- Create the OpenStackDeploy resource:

$ oc create -f openstack-upgrade.yaml -n openstack

- Wait for the deployment to finish. The overcloud nodes are now running 17.1 containers on RHEL 8.
18.7. Upgrading the overcloud to RHEL 9
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. You must then generate a new OpenStackConfigGenerator CR before deploying the updates.
Procedure
- Open the container preparation file, containers-prepare-parameter.yaml, and check that it obtains the correct image versions.
- Remove the following role-specific overrides from the containers-prepare-parameter.yaml file:

ControllerContainerImagePrepare: *container_image_prepare_rhel8
ComputeContainerImagePrepare: *container_image_prepare_rhel8

- Open the roles_data.yaml file and replace OS::TripleO::Services::NovaLibvirtLegacy with OS::TripleO::Services::NovaLibvirt.
- Create an environment file named skip_rhel_release.yaml, and add the following configuration:

parameter_defaults:
  SkipRhelEnforcement: false

- Create an environment file named system_upgrade.yaml and add the Leapp upgrade configuration to it. For more information on the recommended Leapp parameters, see Upgrade parameters in the Framework for upgrades (16.2 to 17.1) guide.
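The configuration listing is not reproduced here. As a hedged illustration only, a Leapp-based system upgrade file typically sets parameters such as the following; take the recommended values from the Upgrade parameters section of that guide rather than from this sketch:

parameter_defaults:
  # Assumed values for illustration: enable the Leapp in-place upgrade to RHEL 9
  # and pass the repositories that Leapp enables on the upgraded system.
  UpgradeLeappEnabled: true
  UpgradeLeappDebug: false
  UpgradeLeappCommandOptions: "--enablerepo rhel-9-for-x86_64-baseos-eus-rpms --enablerepo rhel-9-for-x86_64-appstream-eus-rpms --enablerepo openstack-17.1-for-rhel-9-x86_64-rpms"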
- Create a new OpenStackConfigGenerator CR named system-upgrade that includes the updated heat environment and tripleo-tarball ConfigMaps.
- Create a file named openstack-controller0-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the first Controller node:
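The CR listing is not reproduced here. A minimal sketch follows; the mode, limit, and tags values in advancedSettings are assumptions chosen to mirror the per-node system upgrade that this step describes. The CRs for the second and third Controller nodes and for the Compute nodes in the later steps follow the same pattern with a different name and limit:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: controller0-upgrade
  namespace: openstack
spec:
  # Hash of the configuration generated by the system-upgrade OpenStackConfigGenerator CR.
  configVersion: <config_version>
  configGenerator: system-upgrade
  mode: update
  advancedSettings:
    # Assumption: run only the system upgrade tasks, limited to the first Controller node.
    limit: controller-0
    tags:
      - system_upgrade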
- Save the openstack-controller0-upgrade.yaml file.
- Create the OpenStackDeploy resource to run the system upgrade on Controller 0:

$ oc create -f openstack-controller0-upgrade.yaml -n openstack

- Wait for the deployment to finish.
- Create a file named openstack-controller1-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the second Controller node.
- Save the openstack-controller1-upgrade.yaml file.
- Create the OpenStackDeploy resource to run the system upgrade on Controller 1:

$ oc create -f openstack-controller1-upgrade.yaml -n openstack

- Wait for the deployment to finish.
- Create a file named openstack-controller2-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the third Controller node.
- Save the openstack-controller2-upgrade.yaml file.
- Create the OpenStackDeploy resource to run the system upgrade on Controller 2:

$ oc create -f openstack-controller2-upgrade.yaml -n openstack

- Wait for the deployment to finish.
- Create a file named openstack-computes-upgrade.yaml on your workstation to define an OpenStackDeploy CR that upgrades all Compute nodes.
- Save the openstack-computes-upgrade.yaml file.
- Create the OpenStackDeploy resource to run the system upgrade on the Compute nodes:

$ oc create -f openstack-computes-upgrade.yaml -n openstack

- Wait for the deployment to finish.
18.8. Performing post-upgrade tasks
After the overcloud upgrade completes successfully, you must perform some post-upgrade tasks to finish the upgrade.
Procedure
- Update the baseImageUrl parameter to a RHEL 9.2 guest image in your OpenStackProvisionServer CR and OpenStackBaremetalSet CR.
- Re-enable fencing on the Controller nodes:
$ oc rsh -n openstack openstackclient
$ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=true"

- Perform any other post-upgrade actions relevant to your environment. For more information, see Performing post-upgrade actions in the Framework for upgrades (16.2 to 17.1) guide.