Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator
Using director Operator to deploy and manage a Red Hat OpenStack Platform overcloud in a Red Hat OpenShift Container Platform cluster
Abstract
Support for Red Hat OpenStack Platform director Operator will only be granted if your architecture is approved by Red Hat Services or by a Technical Account Manager. Please contact Red Hat before deploying this feature.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Creating and deploying a RHOSP overcloud with director Operator
Red Hat OpenShift Container Platform (RHOCP) uses a modular system of Operators to extend the functions of your RHOCP cluster. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) adds the ability to install and run a RHOSP cloud within RHOCP. OSPdO manages a set of Custom Resource Definitions (CRDs) that deploy and manage the infrastructure and configuration of RHOSP nodes. The basic architecture of an OSPdO-deployed RHOSP cloud includes the following features:
- Virtualized control plane
- The Controller nodes are virtual machines (VMs) that OSPdO creates in Red Hat OpenShift Virtualization.
- Bare-metal machine provisioning
- OSPdO uses RHOCP bare-metal machine management to provision the Compute nodes for the RHOSP cloud.
- Networking
- OSPdO configures the underlying networks for RHOSP services.
- Heat and Ansible-based configuration
- OSPdO stores custom heat configuration in RHOCP and uses the config-download functionality in director to convert the configuration into Ansible playbooks. If you change the stored heat configuration, OSPdO automatically regenerates the Ansible playbooks.
- CLI client
- OSPdO creates an openstackclient pod for users to run RHOSP CLI commands and interact with their RHOSP cloud.
You can use the resources specific to OSPdO to provision your overcloud infrastructure, generate your overcloud configuration, and create an overcloud. To create a RHOSP overcloud with OSPdO, you must complete the following tasks:
- Install OSPdO on an operational RHOCP cluster.
- Create a RHOCP cluster data volume for the base operating system and add authentication details for your remote Git repository.
- Create the overcloud networks using the OpenStackNetConfig CRD, including the control plane and any isolated networks.
- Create ConfigMaps to store any custom heat templates and environment files for your overcloud.
- Create a control plane, which includes three virtual machines for Controller nodes and a pod to perform client operations.
- Create bare-metal Compute nodes.
- Create an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
- Apply the Ansible playbook configuration to your overcloud nodes by using the openstackdeploy CRD.
1.1. Custom resource definitions for director Operator
The Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) includes a set of custom resource definitions (CRDs) that you can use to manage overcloud resources.
Use the following command to view a complete list of the OSPdO CRDs:
$ oc get crd | grep "^openstack"
Use the following command to view the definition for a specific CRD:
$ oc describe crd openstackbaremetalset

Name:         openstackbaremetalsets.osp-director.openstack.org
Namespace:
Labels:       operators.coreos.com/osp-director-operator.openstack=
Annotations:  cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
              controller-gen.kubebuilder.io/version: v0.3.0
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
...
Use the following command to view descriptions of the fields you can use to configure a specific CRD:
$ oc explain openstackbaremetalset.spec

KIND:     OpenStackBaremetalSet
VERSION:  osp-director.openstack.org/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     <empty>

FIELDS:
   count                 <Object>
   baseImageUrl          <Object>
   deploymentSSHSecret   <Object>
   ctlplaneInterface     <Object>
   networks              <[]Object>
...
OSPdO includes two types of CRD: hardware provisioning and software configuration.
Hardware Provisioning CRDs
- openstacknetattachment (internal)
- Used by OSPdO to manage the NodeNetworkConfigurationPolicy and NodeSriovConfigurationPolicy CRDs, which are used to attach networks to virtual machines (VMs).
- openstacknetconfig
- Use to specify openstacknetattachment and openstacknet CRDs that describe the full network configuration. The set of reserved IP and MAC addresses for each node is reflected in the status.
- openstackbaremetalset
- Use to create sets of bare-metal hosts for specific RHOSP roles, such as "Compute" and "Storage".
- openstackcontrolplane
- Use to create the RHOSP control plane and manage associated openstackvmset CRs.
- openstacknet (internal)
- Use to create networks that are used to assign IPs to the openstackvmset and openstackbaremetalset CRs.
- openstackipset (internal)
- Contains a set of IPs for a given network and role. Used by OSPdO to manage IP addresses.
- openstackprovisionserver
- Use to serve custom images for provisioning bare-metal nodes with Metal3.
- openstackvmset
- Use to create sets of OpenShift Virtualization VMs for a specific RHOSP role, such as "Controller", "Database", or "NetworkController".
Software Configuration CRDs
- openstackconfiggenerator
- Use to automatically generate Ansible playbooks for deployment when you scale up or make changes to custom ConfigMaps for deployment.
- openstackconfigversion
- Use to represent a set of executable Ansible playbooks.
- openstackdeploy
- Use to execute the set of Ansible playbooks defined in the openstackconfigversion CR.
- openstackclient
- Creates a pod used to run RHOSP deployment commands.
1.2. CRD naming conventions
Each custom resource definition (CRD) can have multiple names defined with the spec.names parameter. Which name you use depends on the context of the action you perform:
- Use kind when you create and interact with resource manifests:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
...
The kind name in the resource manifest correlates to the kind name in the respective CRD.
- Use plural when you interact with multiple resources:
$ oc get openstackbaremetalsets
- Use singular when you interact with a single resource:
$ oc describe openstackbaremetalset/compute
- Use shortName for any CLI interactions:
$ oc get osbmset
1.3. Features not supported by director Operator
- Fibre Channel back end
- Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat OpenShift Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block Storage drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-volume and use the role to create physical nodes instead of including it on the virtualized controllers. For more information, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide.
- Role-based Ansible playbooks
- Director Operator (OSPdO) does not support running Ansible playbooks to configure role-based node attributes after the bare-metal nodes are provisioned. This means that you cannot use the role_growvols_args extra Ansible variable to configure whole disk partitions for the Object Storage service (swift). Role-based Ansible playbook configuration only applies to bare-metal nodes that are provisioned by using a node definition file.
- Migration of workloads from Red Hat Virtualization to OSPdO
- You cannot migrate workloads from a Red Hat Virtualization environment to an OSPdO environment.
- Using a VLAN for the control plane network
- TripleO does not support using a VLAN for the control plane (ctlplane) network.
- Multiple Compute cells
- You cannot add additional Compute cells to an OSPdO environment.
- BGP for the control plane
- BGP is not supported for the control plane in an OSPdO environment.
- PCI passthrough and attaching hardware devices to Controller VMs
- You cannot attach SR-IOV devices or FC SAN storage to Controller VMs.
1.4. Limitations with a director Operator deployment
A director Operator (OSPdO) environment has the following support limitations:
- Single-stack IPv6 is not supported. Only IPv4 is supported on the ctlplane network.
- You cannot create VLAN provider networks without dedicated Networker nodes, because the NMState Operator cannot attach a VLAN trunk to the OSPdO Controller VMs. Therefore, to create VLAN provider networks, you must create dedicated Networker nodes on bare metal. For more information, see https://github.com/openstack/tripleo-heat-templates/blob/stable/wallaby/roles/Networker.yaml.
- You cannot remove the provisioning network.
- You cannot use a proxy for SSH connections to communicate with the Git repository.
- You cannot use HTTP or HTTPS to connect to the Git repository.
1.5. Recommendations for a director Operator deployment
- Storage class
- For back-end performance, use low-latency SSD or NVMe-backed storage to create the RWX/RWO storage class required by the Controller virtual machines (VMs), the client pod, and images.
Chapter 2. Installing and preparing director Operator
You install Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) on an existing operational Red Hat OpenShift Container Platform (RHOCP) cluster. You perform the OSPdO installation tasks and all overcloud creation tasks on a workstation that has access to the RHOCP cluster. After you have installed OSPdO, you must create a data volume for the base operating system and add authentication details for your remote Git repository. You can also set the root password for your nodes. If you do not set a root password, you can still log in to nodes with the SSH keys defined in the osp-controlplane-ssh-keys Secret.
Support for Red Hat OpenStack Platform director Operator will only be granted if your architecture is approved by Red Hat Services or by a Technical Account Manager. Please contact Red Hat before deploying this feature.
2.1. Prerequisites
- An operational Red Hat OpenShift Container Platform (RHOCP) cluster, version 4.12, 4.14, or 4.16. The cluster must contain a provisioning network and the following Operators:
  - A baremetal cluster Operator. The baremetal cluster Operator must be enabled. For more information on baremetal cluster Operators, see Bare-metal cluster Operators.
  - OpenShift Virtualization Operator. For more information on installing the OpenShift Virtualization Operator, see Installing OpenShift Virtualization using the web console.
  - SR-IOV Network Operator.
  - Kubernetes NMState Operator. You must also create an NMState instance to finish installing all the NMState CRDs:
cat <<EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
  namespace: openshift-nmstate
EOF
For more information on installing the Kubernetes NMState Operator, see Installing the Kubernetes NMState Operator.
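Optionally, to confirm that the NMState instance is ready and its CRDs are installed before you continue, you can run the following checks; this verification step is a suggestion, not part of the official procedure:
$ oc get nmstate -n openshift-nmstate
$ oc get crd | grep nmstate.io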
- The oc command line tool is installed on your workstation.
- A remote Git repository for OSPdO to store the generated configuration for your overcloud.
- An SSH key pair is generated for the Git repository and the public key is uploaded to the Git repository.
The following persistent volumes to fulfill the persistent volume claims that OSPdO creates:
- 4G for openstackclient-cloud-admin.
- 1G for openstackclient-hosts.
- 500G for the base image that OSPdO clones for each Controller virtual machine.
- A minimum of 50G for each Controller virtual machine. For more information, see Controller node requirements.
2.2. Bare-metal cluster Operators
Red Hat OpenShift Container Platform (RHOCP) clusters that you install with installer-provisioned infrastructure (IPI) or assisted installation (AI) use the baremetal platform type and have the baremetal cluster Operator enabled. RHOCP clusters that you install with user-provisioned infrastructure (UPI) use the none platform type and might have the baremetal cluster Operator disabled.
If the cluster is of type AI or IPI, it uses metal3
, a Kubernetes API for the management of bare-metal hosts. It maintains an inventory of available hosts as instances of the BareMetalHost
custom resource definition (CRD). You can use the bare-metal Operator to perform the following tasks:
- Inspect the host’s hardware details and report them to the corresponding BareMetalHost CR. This includes information about CPUs, RAM, disks, and NICs.
- Provision hosts with a specific image.
- Clean a host’s disk contents before or after provisioning.
To check if the baremetal
cluster Operator is enabled, navigate to Administration > Cluster Settings > ClusterOperators > baremetal, scroll to the Conditions section, and view the Disabled status.
To check the platform type of the RHOCP cluster, navigate to Administration > Cluster Settings > Configuration > Infrastructure, switch to YAML view, and view the status.platformStatus value.
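If you prefer the CLI to the web console, you can run equivalent checks with oc. These commands query standard RHOCP resources and are offered as a convenience, not as part of the official procedure:
$ oc get clusteroperator baremetal
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'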
2.3. Installing director Operator
To install director Operator (OSPdO), you must create the openstack project (namespace) for OSPdO, and create the following custom resources (CRs) within the project:
- A CatalogSource, which identifies the index image to use for the OSPdO catalog.
- An OperatorGroup, which defines the Operator group for OSPdO and restricts OSPdO to a target namespace.
- A Subscription, which tracks changes in the OSPdO catalog.
Procedure
Create the OSPdO project:
$ oc new-project openstack
- Obtain the latest osp-director-operator-bundle image from https://catalog.redhat.com/software/containers/search.
- Download the Operator Package Manager (opm) tool from https://console.redhat.com/openshift/downloads.
- Use the opm tool to create an index image:
$ BUNDLE_IMG="registry.redhat.io/rhosp-rhel9/osp-director-operator-bundle:1.3.1"
$ INDEX_IMG="quay.io/<account>/osp-director-operator-index:x.y.z-a"
$ opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman
Push the index image to your registry:
$ podman push ${INDEX_IMG}
- Create an environment file to configure the CatalogSource, OperatorGroup, and Subscription CRs required to install OSPdO, for example, osp-director-operator.yaml.
- To configure the CatalogSource CR, add the following configuration to osp-director-operator.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: quay.io/<account>/osp-director-operator-index:x.y.z-a
For information about how to apply the Quay authentication so that the Operator deployment can pull the image, see Accessing images for Operators from private registries.
To configure the OperatorGroup CR, add the following configuration to osp-director-operator.yaml:
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "osp-director-operator-group"
  namespace: openstack
spec:
  targetNamespaces:
  - openstack
To configure the Subscription CR, add the following configuration to osp-director-operator.yaml:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  config:
    env:
    - name: WATCH_NAMESPACE
      value: openstack,openshift-machine-api,openshift-sriov-network-operator
  source: osp-director-operator-index
  sourceNamespace: openstack
  name: osp-director-operator
Create the new CatalogSource, OperatorGroup, and Subscription CRs within the openstack namespace:
$ oc apply -f osp-director-operator.yaml
Confirm that you have installed OSPdO, osp-director-operator.openstack, by listing the installed Operators:
$ oc get operators
NAME                              AGE
osp-director-operator.openstack   5m
2.4. Creating a data volume for the base operating system
You must create a data volume with the Red Hat OpenShift Container Platform (RHOCP) cluster to store the base operating system image for your Controller virtual machines (VMs). You use the baseImageVolumeName
parameter to specify this data volume when you create the OpenStackControlPlane
and OpenStackVmSet
custom resources.
Prerequisites
The
virtctl
client tool is installed on your workstation. To install this tool on a Red Hat Enterprise Linux (RHEL) workstation, use the following commands:$ sudo subscription-manager repos --enable=cnv-4.12-for-rhel-8-x86_64-rpms $ sudo dnf install -y kubevirt-virtctl
The
virt-customize
client tool is installed on your workstation. To install this tool on a RHEL workstation, use the following command:$ dnf install -y libguestfs-tools-c
Procedure
- Download a RHEL 9.2 QCOW2 image from the Product Download section of the Red Hat Customer Portal to your workstation.
Optional: Add a custom CA certificate:
$ sudo -s
$ export LIBGUESTFS_BACKEND=direct
$ virt-copy-in -a <local_path_to_image> <ca_certificate>.pem /etc/pki/ca-trust/source/anchors/
You might want to add a custom CA certificate to secure LDAP communication for the Identity service, or to communicate with any non-RHOSP system.
Create a script to customize the image to assign predictable network interface names:
#!/bin/bash
set -eux

if [ -e /etc/kernel/cmdline ]; then
  echo 'Updating /etc/kernel/cmdline'
  sed -i -e "s/^\(.*\)net\.ifnames=0\s*\(.*\)/\1\2/" /etc/kernel/cmdline
fi

source /etc/default/grub

if grep -q "net.ifnames=0" <<< "$GRUB_CMDLINE_LINUX"; then
  echo 'Updating /etc/default/grub'
  sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net\.ifnames=0\s*\(.*\)/\1\2/" /etc/default/grub
fi

if [ "$GRUB_ENABLE_BLSCFG" == "true" ]; then
  echo 'Fixing BLS entries'
  find /boot/loader/entries -type f -exec sed -i -e "s/^\(.*\)net\.ifnames=0\s*\(.*\)/\1\2/" {} \;
fi

# Always do this, on RHEL8 with BLS we still need it as the BLS entry uses $kernelopts from grubenv
echo 'Running grub2-mkconfig'
grub2-mkconfig -o /etc/grub2.cfg
grub2-mkconfig -o /etc/grub2-efi.cfg

rm -f /etc/sysconfig/network-scripts/ifcfg-ens* /etc/sysconfig/network-scripts/ifcfg-eth*

update-ca-trust extract
Run the image customization script:
$ sudo -s
$ export LIBGUESTFS_BACKEND=direct
$ chmod 755 customize_image.sh
$ virt-customize -a <local_path_to_image> --run customize_image.sh --truncate /etc/machine-id
Use
virtctl
to upload the image to OpenShift Virtualization:$ virtctl image-upload dv <datavolume_name> -n openstack \ --size=<size> --image-path=<local_path_to_image> \ --storage-class <storage_class> --access-mode <access_mode> --insecure
- Replace <datavolume_name> with the name of the data volume, for example, openstack-base-img.
- Replace <size> with the size of the data volume required for your environment, for example, 500Gi. The minimum size is 500GB.
- Replace <storage_class> with the required storage class from your cluster. Use the following command to retrieve the available storage classes:
$ oc get storageclass
- Replace <access_mode> with the access mode for the PVC. The default value is ReadWriteOnce.
2.5. Adding authentication details for your remote Git repository
Director Operator (OSPdO) stores rendered Ansible playbooks in a remote Git repository and uses this repository to track changes to the overcloud configuration. You can use any Git repository that supports SSH authentication. You must provide details for the Git repository as a Red Hat OpenShift Container Platform (RHOCP) Secret resource named git-secret.
Prerequisites
- The private key of the SSH key pair for your OSPdO Git repository.
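If you have not generated the key pair yet, one common approach is to create a dedicated key with ssh-keygen and upload the public key to your Git server. The file path and key type below are illustrative assumptions, not requirements:
$ ssh-keygen -t ed25519 -f ~/.ssh/git_id_rsa -N ''
$ cat ~/.ssh/git_id_rsa.pub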
Procedure
Create the git-secret Secret resource:
$ oc create secret generic <secret_name> -n <namespace> \
  --from-file=git_ssh_identity=<path_to_private_SSH_key> \
  --from-literal=git_url=<git_server_URL>
- Replace <secret_name> with the name of the secret, in this case, git-secret.
- Replace <namespace> with the name of the namespace to create the secret in, for example, openstack.
- Replace <path_to_private_SSH_key> with the path to the private key to access the Git repository.
- Replace <git_server_URL> with the SSH URL of the Git repository that stores the OSPdO configuration, for example, ssh://<user>@<server>:2202/repo.git.
Verify that the Secret resource is created:
$ oc get secret/git-secret -n openstack
2.6. Setting the root password for nodes
To access the root
user with a password on each node, you can set a root
password in a Secret
resource named userpassword
. Setting the root password for nodes is optional. If you do not set a root
password, you can still log into nodes with the SSH keys defined in the osp-controlplane-ssh-keys
Secret.
If you set the root password, you must use the passwordSecret
parameter to specify the name of this Secret
resource when you create OpenStackControlPlane
and OpenStackBaremetalSet
custom resources. The examples in this guide use the Secret
resource name userpassword
.
Procedure
Convert your chosen password to a base64 value:
$ echo -n "p@ssw0rd!" | base64
cEBzc3cwcmQh
Important: The -n option removes the trailing newline from the echo output.
Create a file named openstack-userpassword.yaml on your workstation. Include the following resource specification for the Secret in the file:
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openstack
data:
  NodeRootPassword: "<password>"
- Replace <secret_name> with the name of this Secret resource, for example, userpassword.
- Replace <password> with your base64-encoded password.
Create the userpassword Secret:
$ oc create -f openstack-userpassword.yaml -n openstack
Chapter 3. Creating networks with director Operator
To create networks and bridges on OpenShift Virtualization worker nodes and connect your virtual machines (VMs) to these networks, you define your OpenStackNetConfig
custom resource (CR) and specify all the subnets for the overcloud networks. You must create one control plane network for your overcloud. You can also optionally create additional networks to implement network isolation for your composable networks.
3.1. Creating an overcloud network with the OpenStackNetConfig CRD
You must use the OpenStackNetConfig
CRD to define at least one control plane network for your overcloud. You can also optionally define VLAN networks to create network isolation for composable networks such as InternalAPI
, Storage
, and External
. Each network definition must include the IP address assignment, and the mapping information for the OpenStackNetAttachment
CRD. OpenShift Virtualization uses the network definition to attach any virtual machines (VMs) to the control plane and VLAN networks.
Use the following commands to view the OpenStackNetConfig
CRD definition and specification schema:
$ oc describe crd openstacknetconfig
$ oc explain openstacknetconfig.spec
Procedure
- Create a file named openstacknetconfig.yaml on your workstation.
- Add the following configuration to openstacknetconfig.yaml to create the OpenStackNetConfig custom resource (CR):
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
Configure network attachment definitions for the bridges you require for your network. For example, add the following configuration to openstacknetconfig.yaml to create the RHOSP bridge network attachment definition br-osp, and set the nodeNetworkConfigurationPolicy option to create a Linux bridge:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
Optional: To use jumbo frames for a bridge, configure the bridge interface to use jumbo frames and update the value of mtu for the bridge:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 9000
          - name: enp6s0
            description: Configuring enp6s0 on workers
            type: ethernet
            state: up
            mtu: 9000
...
Define each overcloud network. The following example creates a control plane network and an isolated network for InternalAPI traffic:
spec:
  ...
  networks:
  - name: Control 1
    nameLower: ctlplane 2
    subnets: 3
    - name: ctlplane 4
      ipv4: 5
        allocationEnd: 172.22.0.250
        allocationStart: 172.22.0.100
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
      attachConfiguration: br-osp 6
  - name: InternalApi 7
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20 8
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
...
- 1
- The name of the network, for example,
Control
. - 2
- The lowercase version of the network name, for example,
ctlplane
. - 3
- The subnet specifications.
- 4
- The name of the subnet, for example,
ctlplane
. - 5
- Details of the IPv4 subnet with
allocationStart
,allocationEnd
,cidr
,gateway
, and an optional list of routes withdestination
andnexthop
. - 6
- The network attachment definition to connect the network to. In this example, the RHOSP bridge,
br-osp
, is connected to a NIC on each worker. - 7
- The network definition for a composable network. To use the default RHOSP networks, you must create an
OpenStackNetConfig
resource for each network. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. To use different networks, you must create a customnetwork_data.yaml
file. For information on creating a customnetwork_data.yaml
file, see Configuring overcloud networking. - 8
- The network VLAN. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. For more information on virtual machine bridging with the
OpenStackNetConfig
CRD, see Understanding virtual machine bridging with the OpenStackNetConfig CRD.
Optional: Reserve static IP addresses for networks on specific nodes:
spec:
  ...
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120
    compute-0:
      ipReservations:
        ctlplane: 172.22.0.140
        internal_api: 172.17.0.40
        storage: 172.18.0.40
        tenant: 172.20.0.40
Note: Reservations take precedence over any autogenerated IP addresses.
- Save the openstacknetconfig.yaml definition file.
- Create the overcloud network:
$ oc create -f openstacknetconfig.yaml -n openstack
To verify that the overcloud network is created, view the resources for the overcloud network:
$ oc get openstacknetconfig/openstacknetconfig
View the OpenStackNetConfig API and child resources:
$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
If you see errors, check the underlying network-attachment-definition and node network configuration policies:
$ oc get network-attachment-definitions -n openstack
$ oc get nncp
3.1.1. Default Red Hat OpenStack Platform networks
Network | VLAN | CIDR | Allocation |
---|---|---|---|
External | 10 | 10.0.0.0/24 | 10.0.0.10 - 10.0.0.250 |
InternalApi | 20 | 172.17.0.0/24 | 172.17.0.10 - 172.17.0.250 |
Storage | 30 | 172.18.0.0/24 | 172.18.0.10 - 172.18.0.250 |
StorageMgmt | 40 | 172.19.0.0/24 | 172.19.0.10 - 172.19.0.250 |
Tenant | 50 | 172.20.0.0/24 | 172.20.0.10 - 172.20.0.250 |
3.2. Understanding virtual machine bridging with the OpenStackNetConfig CRD
When you create virtual machines (VMs) with the OpenStackVMSet
CRD, you must connect these VMs to the relevant Red Hat OpenStack Platform (RHOSP) networks. You can use the OpenStackNetConfig
CRD to create the required bridges on the Red Hat OpenShift Container Platform (RHOCP) worker nodes and connect your Controller VMs to your RHOSP overcloud networks. RHOSP requires dedicated NICs to deploy.
The OpenStackNetConfig
CRD includes an attachConfigurations
option, which is a hash of nodeNetworkConfigurationPolicy
. Each specified attachConfiguration
in an OpenStackNetConfig
custom resource (CR) creates a NetworkAttachmentDefinition
object, which passes network interface data to the NodeNetworkConfigurationPolicy
resource in the RHOCP cluster. The NodeNetworkConfigurationPolicy
resource uses the nmstate
API to configure the end state of the network configuration on each RHOCP worker node. The NetworkAttachmentDefinition
object for each network defines the Multus CNI plugin configuration. When you specify the VLAN ID for the NetworkAttachmentDefinition
object, the Multus CNI plugin enables vlan-filtering
on the bridge. Each network configured in the OpenStackNetConfig
CR references one of the attachConfigurations
. Inside the VMs, there is one interface for each network.
The following example creates a br-osp attachConfiguration
, and configures the nodeNetworkConfigurationPolicy
option to create a Linux bridge and connect the bridge to a NIC on each worker. When you apply this configuration, the NodeNetworkConfigurationPolicy
object configures each RHOCP worker node to match the required end state: each worker contains a new bridge named br-osp
, which is connected to the enp6s0
NIC on each host. All RHOSP Controller VMs can connect to the br-osp
bridge for control plane network traffic.
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
  ...
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 192.168.25.250
        allocationStart: 192.168.25.100
        cidr: 192.168.25.0/24
        gateway: 192.168.25.1
      attachConfiguration: br-osp
If you specify an Internal API network through VLAN 20, you can set the attachConfiguration
option to modify the networking configuration on each RHOCP worker node and connect the VLAN to the existing br-osp
bridge:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
  ...
  networks:
  ...
  - isControlPlane: false
    mtu: 1500
    name: InternalApi
    nameLower: internal_api
    subnets:
    - attachConfiguration: br-osp
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        gateway: 172.17.0.1
      routes:
      - destination: 172.17.1.0/24
        nexthop: 172.17.0.1
      - destination: 172.17.2.0/24
        nexthop: 172.17.0.1
      name: internal_api
      vlan: 20
The br-osp
already exists and is connected to the enp6s0
NIC on each host, so no change occurs to the bridge itself. However, the InternalAPI
OpenStackNet
associates VLAN 20 to this network, which means RHOSP Controller VMs can connect to the VLAN 20 on the br-osp
bridge for Internal API network traffic.
When you create VMs with the OpenStackVMSet
CRD, the VMs use multiple Virtio devices connected to each network. OpenShift Virtualization sorts the network names in alphabetical order except for the default
network, which is the RHOCP cluster network that all the pods are connected to. The default
network is always the first interface. You cannot configure the default
network. For example, if you create the default RHOSP networks with OpenStackNetConfig
, the following interface configuration is generated for Controller VMs:
interfaces:
- masquerade: {}
  model: virtio
  name: default
- bridge: {}
  model: virtio
  name: ctlplane
- bridge: {}
  model: virtio
  name: external
- bridge: {}
  model: virtio
  name: internalapi
- bridge: {}
  model: virtio
  name: storage
- bridge: {}
  model: virtio
  name: storagemgmt
- bridge: {}
  model: virtio
  name: tenant
This configuration results in the following network-to-interface mapping for Controller nodes:
Network | Interface |
---|---|
default | nic1 |
ctlplane | nic2 |
external | nic3 |
internalapi | nic4 |
storage | nic5 |
storagemgmt | nic6 |
tenant | nic7 |
The role NIC template used by OpenStackVMSet is auto-generated. You can override the default configuration in a custom NIC template file, <role>-nic-template.j2, for example, controller-nic-template.j2. You must add your custom NIC file to the tarball file that contains your overcloud configuration, which is implemented by using an OpenShift ConfigMap object, as shown in the example that follows. For more information, see Chapter 4, Customizing the overcloud with director Operator.
3.3. Example OpenStackNetConfig custom resource file
The following example OpenStackNetConfig custom resource (CR) file defines an overcloud network that includes a control plane network and isolated VLAN networks for a default RHOSP deployment. The example also reserves static IP addresses for networks on specific nodes.
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 9000
          - name: enp7s0
            description: Configuring enp7s0 on workers
            type: ethernet
            state: up
            mtu: 9000
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 172.22.0.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 172.22.0.250
        allocationStart: 172.22.0.10
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        gateway: 172.17.0.1
      routes:
      - destination: 172.17.1.0/24
        nexthop: 172.17.0.1
      - destination: 172.17.2.0/24
        nexthop: 172.17.0.1
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1500
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
      vlan: 30
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1500
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
      vlan: 40
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1500
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
      vlan: 50
      attachConfiguration: br-osp
  reservations:
    compute-0:
      ipReservations:
        ctlplane: 172.22.0.140
        internal_api: 172.17.0.40
        storage: 172.18.0.40
        tenant: 172.20.0.40
      macReservations: {}
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120
        external: 10.0.0.20
        internal_api: 172.17.0.20
        storage: 172.18.0.20
        storage_mgmt: 172.19.0.20
        tenant: 172.20.0.20
      macReservations: {}
    controller-1:
      ipReservations:
        ctlplane: 172.22.0.130
        external: 10.0.0.30
        internal_api: 172.17.0.30
        storage: 172.18.0.30
        storage_mgmt: 172.19.0.30
        tenant: 172.20.0.30
      macReservations: {}
    # The key for the ctlplane VIPs
    controlplane:
      ipReservations:
        ctlplane: 172.22.0.110
        external: 10.0.0.10
        internal_api: 172.17.0.10
        storage: 172.18.0.10
        storage_mgmt: 172.19.0.10
      macReservations: {}
    openstackclient-0:
      ipReservations:
        ctlplane: 172.22.0.251
        external: 10.0.0.251
        internal_api: 172.17.0.251
      macReservations: {}
Chapter 4. Customizing the overcloud with director Operator
You can customize your overcloud or enable certain features by creating heat templates and environment files that you include with your overcloud deployment. With a director Operator (OSPdO) overcloud deployment, you store these files in ConfigMap
objects before running the overcloud deployment.
4.1. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap
object named tripleo-tarball-config
. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
Navigate to the location of your custom templates:
$ cd ~/custom_templates
Archive the templates into a gzipped tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
Create the tripleo-tarball-config ConfigMap CR and use the tarball as data:
$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
Verify that the ConfigMap CR is created:
$ oc get configmap/tripleo-tarball-config -n openstack
4.2. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap
object named heat-env-config
to store and retrieve environment files. The ConfigMap
object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap
contains two environment files:
...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
Upload a set of custom environment files from a directory to a ConfigMap
object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
Create the heat-env-config ConfigMap object:
$ oc create configmap -n openstack heat-env-config \
  --from-file=~/<dir_custom_environment_files>/ \
  --dry-run=client -o yaml | oc apply -f -
- Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
Verify that the heat-env-config ConfigMap object contains all the required environment files:
$ oc get configmap/heat-env-config -n openstack
Chapter 5. Creating overcloud nodes with director Operator
A Red Hat OpenStack Platform (RHOSP) overcloud consists of multiple nodes, such as Controller nodes to provide control plane services and Compute nodes to provide computing resources. For a functional overcloud with high availability, you must have 3 Controller nodes and at least one Compute node. You can create Controller nodes with the OpenStackControlPlane
Custom Resource Definition (CRD) and Compute nodes with the OpenStackBaremetalSet
CRD.
Red Hat OpenShift Container Platform (RHOCP) does not autodiscover issues on RHOCP worker nodes, or perform autorecovery of worker nodes that host RHOSP Controller VMs if the worker node fails or has an issue. You must enable health checks on your RHOCP cluster to automatically relocate Controller VM pods when a host worker node fails. For information on how to autodiscover issues on RHOCP worker nodes, see Deploying machine health checks.
5.1. Creating a control plane with the OpenStackControlPlane CRD
The Red Hat OpenStack Platform (RHOSP) control plane contains the RHOSP services that manage the overcloud. The default control plane consists of 3 Controller nodes. You can use composable roles to manage services on dedicated controller virtual machines (VMs). For more information on composable roles, see Composable services and custom roles.
Define an OpenStackControlPlane
custom resource (CR) to create the Controller nodes as OpenShift Virtualization virtual machines (VMs).
Use the following commands to view the OpenStackControlPlane
CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
Procedure
Create a file named
openstack-controller.yaml
on your workstation. Include the resource specification for the Controller nodes. The following example defines a specification for a control plane that consists of 3 Controller nodes:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud 1
  namespace: openstack 2
spec: 3
  openStackClientNetworks:
  - ctlplane
  - internal_api
  - external
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword 4
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 3
      networks:
      - ctlplane
      - internal_api
      - external
      - tenant
      - storage
      - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img 5
        storageClass: host-nfs-storageclass 6
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
      # optional: additional disks to attach to the VMs. You must configure
      # where they are used manually inside the VMs.
      additionalDisks:
      - name: datadisk
        diskSize: 500
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
  openStackRelease: "17.1"
- 1
- The name of the overcloud control plane, for example,
overcloud
. - 2
- The OSPdO namespace, for example,
openstack
. - 3
- The configuration for the control plane.
- 4
- Optional: The
Secret
resource that provides root access on each node to users with the password. - 5
- The name of the data volume that stores the base operating system image for your Controller VMs. For more information on creating the data volume, see Creating a data volume for the base operating system.
- 6
- For information on configuring Red Hat OpenShift Container Platform (RHOCP) storage, see Dynamic provisioning.
- Save the openstack-controller.yaml file.
- Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. OSPdO also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the OpenStackVMSet resources to verify the creation of the control plane VM set:
$ oc get openstackvmsets -n openstack
View the VMs to verify the creation of the control plane OpenShift Virtualization VMs:
$ oc get virtualmachines -n openstack
Test access to the openstackclient remote shell:
$ oc rsh -n openstack openstackclient
5.2. Creating Compute nodes with the OpenStackBaremetalSet CRD
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet
custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet
CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created a BareMetalHost CR for each bare-metal node that you want to add as a Compute node to the overcloud. For information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide.
Procedure
Create a file named
openstack-compute.yaml
on your workstation. Include the resource specification for the Compute nodes. The following example defines a specification for 1 Compute node:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # If you manually created an OpenStackProvisionServer, you can use it here;
  # otherwise director Operator creates one for you (with baseImageUrl as the
  # image that it serves) to use with this OpenStackBaremetalSet
  # provisionServerName: openstack-provision-server
  ctlplaneInterface: enp2s0
  networks:
  - ctlplane
  - internal_api
  - tenant
  - storage
  roleName: Compute
  passwordSecret: userpassword
- Save the openstack-compute.yaml file.
- Create the Compute nodes:
$ oc create -f openstack-compute.yaml -n openstack
Verification
View the resource for the Compute nodes:
$ oc get openstackbaremetalset/compute -n openstack
View the bare-metal machines that RHOCP manages to verify the creation of the Compute nodes:
$ oc get baremetalhosts -n openshift-machine-api
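Provisioning bare-metal nodes can take some time. As a convenience, you can watch the BareMetalHost resources as they transition through states such as provisioning and provisioned:
$ oc get baremetalhosts -n openshift-machine-api -w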
5.3. Creating a provisioning server with the OpenStackProvisionServer CRD
Provisioning servers provide a specific Red Hat Enterprise Linux (RHEL) QCOW2 image for provisioning Compute nodes for the Red Hat OpenStack Platform (RHOSP). An OpenStackProvisionServer
CR is automatically created for any OpenStackBaremetalSet
CRs you create. You can create the OpenStackProvisionServer
CR manually and provide the name to any OpenStackBaremetalSet
CRs that you create.
The OpenStackProvisionServer
CRD creates an Apache server on the Red Hat OpenShift Container Platform (RHOCP) provisioning network for a specific RHEL QCOW2 image.
Procedure
Create a file named
openstack-provision.yaml
on your workstation. Include the resource specification for the Provisioning server. The following example defines a specification for a Provisioning server that uses a specific RHEL 9.2 QCOW2 image:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstack-provision-server 1
  namespace: openstack 2
spec:
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 3
  port: 8080 4
- 1
- The name that identifies the
OpenStackProvisionServer
CR. - 2
- The OSPdO namespace, for example,
openstack
. - 3
- The initial source of the RHEL QCOW2 image for the Provisioning server. The image is downloaded from this remote source when the server is created.
- 4
- The Provisioning server port, set to 8080 by default. You can change it for a specific port configuration.
For further descriptions of the values you can use to configure your OpenStackProvisionServer CR, view the OpenStackProvisionServer CRD specification schema:
$ oc describe crd openstackprovisionserver
- Save the openstack-provision.yaml file.
- Create the Provisioning server:
$ oc create -f openstack-provision.yaml -n openstack
Verify that the resource for the Provisioning server is created:
$ oc get openstackprovisionserver/openstack-provision-server -n openstack
Chapter 6. Configuring and deploying the overcloud with director Operator
You can configure your overcloud nodes after you have provisioned virtual and bare-metal nodes for your overcloud. You must create an OpenStackConfigGenerator
resource to generate your Ansible playbooks, register your nodes to either the Red Hat Customer Portal or Red Hat Satellite, and then create an OpenStackDeploy
resource to apply the configuration to your nodes.
6.1. Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD
After you provision the overcloud infrastructure, you must create a set of Ansible playbooks to configure Red Hat OpenStack Platform (RHOSP) on the overcloud nodes. You use the OpenStackConfigGenerator
custom resource definition (CRD) to create these playbooks. The OpenStackConfigGenerator
CRD uses the RHOSP director config-download
feature to convert heat configuration to playbooks.
Use the following commands to view the OpenStackConfigGenerator
CRD definition and specification schema:
$ oc describe crd openstackconfiggenerator
$ oc explain openstackconfiggenerator.spec
Prerequisites
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created Compute nodes with the OpenStackBaremetalSet CRD.
- You have created a ConfigMap object that contains your custom heat templates.
- You have created a ConfigMap object that contains your custom environment files.
- You have manually installed the fence-agents-kubevirt package on the Controller VMs.
Procedure
Create a file named
openstack-config-generator.yaml
on your workstation. Include the resource specification to generate the Ansible playbooks. The following example defines a specification to generate the playbooks:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default 1
  namespace: openstack
spec:
  enableFencing: true 2
  gitSecret: git-secret 3
  imageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1
  heatEnvConfigMap: heat-env-config 4
  # List of heat environment files to include from tripleo-heat-templates/environments
  heatEnvs: 5
  - ssl/tls-endpoints-public-dns.yaml
  - ssl/enable-tls.yaml
  tarballConfigMap: tripleo-tarball-config 6
- 1
- The name of the config generator, which is
default
by default. - 2
- Set to
true
to enable the automatic creation of required heat environment files to enable fencing. Production RHOSP environments must have fencing enabled. Virtual machines running Pacemaker require thefence-agents-kubevirt
package. - 3
- Set to the
ConfigMap
object that contains the Git authentication credentials, by defaultgit-secret
. - 4
- The
ConfigMap
object that contains your custom environment files, by defaultheat-env-config
. - 5
- A list of the default heat environment files, provided by TripleO in the
tripleo-heat-templates/environments
directory, to use to generate the playbooks. - 6
- The
ConfigMap
object that contains the tarball with your custom heat templates, by defaulttripleo-tarball-config
.
Optional: To change the location of the container images the
OpenStackConfigGenerator
CR uses to create the ephemeral heat service, add the following configuration to youropenstack-config-generator.yaml
file:
spec:
  ...
  ephemeralHeatSettings:
    heatAPIImageURL: <heat_api_image_location>
    heatEngineImageURL: <heat_engine_image_location>
    mariadbImageURL: <mariadb_image_location>
    rabbitImageURL: <rabbitmq_image_location>
- Replace <heat_api_image_location> with the path to the directory where you host your heat API image, openstack-heat-api.
- Replace <heat_engine_image_location> with the path to the directory where you host your heat engine image, openstack-heat-engine.
- Replace <mariadb_image_location> with the path to the directory where you host your MariaDB image, openstack-mariadb.
- Replace <rabbitmq_image_location> with the path to the directory where you host your RabbitMQ image, openstack-rabbitmq.
Optional: To create the Ansible playbooks for configuration generation in debug mode, add the following configuration to your
openstack-config-generator.yaml
file:
spec:
  ...
  interactive: true
For more information on debugging an
OpenStackConfigGenerator
pod in interactive mode, see Debugging configuration generation.
- Save the openstack-config-generator.yaml file.
- Create the Ansible config generator:
$ oc create -f openstack-config-generator.yaml -n openstack
Verify that the resource for the config generator is created:
$ oc get openstackconfiggenerator/default -n openstack
6.2. Registering the operating system of your overcloud
Before director Operator (OSPdO) configures the overcloud nodes, you must register the operating system of all nodes to either the Red Hat Customer Portal or Red Hat Satellite Server, and enable repositories for your nodes.
As part of the OpenStackControlPlane
CR, OSPdO creates an OpenStackClient
pod that you access through a Remote Shell (RSH) to run Red Hat OpenStack Platform (RHOSP) commands. This pod also contains an Ansible inventory script named /home/cloud-admin/ctlplane-ansible-inventory
.
To register your nodes, you can use the redhat_subscription
Ansible module with the inventory script from the OpenStackClient pod.
Procedure
Open an RSH connection to the OpenStackClient pod:
$ oc rsh -n openstack openstackclient
Change to the cloud-admin home directory:
$ cd /home/cloud-admin
Create a playbook that uses the redhat_subscription module to register your nodes. For example, the following playbook registers Controller nodes:
---
- name: Register Controller nodes
  hosts: Controller
  become: yes
  vars:
    repos:
    - rhel-9-for-x86_64-baseos-eus-rpms
    - rhel-9-for-x86_64-appstream-eus-rpms
    - rhel-9-for-x86_64-highavailability-eus-rpms
    - openstack-17.1-for-rhel-9-x86_64-rpms
    - fast-datapath-for-rhel-9-x86_64-rpms
    - rhceph-6-tools-for-rhel-9-x86_64-rpms
  tasks:
  - name: Register system
    redhat_subscription:
      username: myusername
      password: p@55w0rd!
      org_id: 1234567
      release: 9.2
      pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
  - name: Disable all repos
    command: "subscription-manager repos --disable *"
  - name: Enable Controller node repos
    command: "subscription-manager repos --enable {{ item }}"
    with_items: "{{ repos }}"
Register the overcloud nodes to the required repositories:
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
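If the playbook fails to reach a node, you can check connectivity with an ad hoc Ansible ping against the same inventory. This check is a suggestion and is not part of the official procedure:
$ ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -m ping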
6.3. Applying overcloud configuration with director Operator
You can configure the overcloud with director Operator (OSPdO) only after you have created your control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks to configure software on each node. When you create an OpenStackDeploy
custom resource (CR), OSPdO creates a job that runs the Ansible playbooks to configure the overcloud.
Use the following commands to view the OpenStackDeploy
CRD definition and specification schema:
$ oc describe crd openstackdeploy
$ oc explain openstackdeploy.spec
Prerequisites
-
You have created a control plane with the
OpenStackControlPlane
CRD. -
You have created Compute nodes with the
OpenStackBaremetalSet
CRD. -
You have used the
OpenStackConfigGenerator
CRD to create the Ansible playbook configuration for your overcloud.
Procedure
Retrieve the
hash/digest
of the latestOpenStackConfigVersion
object, which represents the Ansible playbooks that should be used to configure the overcloud:

$ oc get -n openstack --sort-by {.metadata.creationTimestamp} openstackconfigversion -o json
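If several `OpenStackConfigVersion` objects exist, a filter such as the following extracts only the newest entry. This sketch assumes the object name carries the `hash/digest`, which matches the example value shown in the next step:

$ oc get openstackconfigversion -n openstack --sort-by {.metadata.creationTimestamp} -o json | jq -r '.items[-1].metadata.name'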
Create a file named
openstack-deployment.yaml
on your workstation and include the resource specification that references the Ansible playbooks:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
spec:
  configVersion: <config_version>
  configGenerator: default
- Replace `<config_version>` with the Ansible playbooks `hash/digest` retrieved in step 1, for example, `n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h...`.
- Save the `openstack-deployment.yaml` file.
- Create the `OpenStackDeploy` resource:

$ oc create -f openstack-deployment.yaml -n openstack
As the deployment runs, it creates a Kubernetes job to execute the Ansible playbooks. You can view the logs of the job to watch the Ansible playbooks running:
$ oc logs -f jobs/deploy-openstack-default
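Beyond the job logs, you can poll the `OpenStackDeploy` CR itself to check the overall state of the deployment; the resource name `default` matches the manifest created above:

$ oc get openstackdeploy/default -n openstack -o yaml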
You can also manually inspect the executed Ansible playbooks by logging in to the `openstackclient` pod. You can find the Ansible playbooks and the `ansible.log` file for the current deployment in `/home/cloud-admin/work/directory`.
6.4. Debugging configuration generation
To debug configuration generation operations, you can set the OpenStackConfigGenerator
CR to use interactive mode. In interactive mode, the OpenStackConfigGenerator
CR creates the environment to start rendering the playbooks, but does not automatically render the playbooks.
Prerequisites
Your
OpenStackConfigGenerator
CR was created in interactive mode:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  interactive: true
-
The
OpenStackConfigGenerator
pod with the prefixgenerate-config
has started.
Procedure
Open a Remote Shell (RSH) connection to the
OpenStackConfigGenerator
pod:

$ oc rsh $(oc get pod -o name -l job-name=generate-config-default)
Inspect the files and playbook rendering:
$ ls -la /home/cloud-admin/
...
config                       1
config-custom                2
config-passwords             3
create-playbooks.sh          4
process-heat-environment.py  5
tht-tars                     6

1. Directory that stores the files auto-rendered by OSPdO.
2. Directory that stores the environment files specified with the `heatEnvConfigMap` option.
3. Directory that stores the overcloud service passwords created by OSPdO.
4. Script that renders the Ansible playbooks.
5. Internal script used by `create-playbooks` to replicate the undocumented heat client merging of map parameters.
6. Directory that stores the tarball specified with the `tarballConfigMap` option.
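In interactive mode the playbooks are not rendered automatically, so a typical debugging session runs the render script by hand from inside the pod. Invoking the script directly, as shown here, is an assumption based on the file listing above:

$ cd /home/cloud-admin
$ ./create-playbooks.sh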
Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator
You can use director Operator (OSPdO) to deploy an overcloud with hyperconverged infrastructure (HCI). An overcloud with HCI colocates Compute and Red Hat Ceph Storage OSD services on the same nodes.
7.1. Prerequisites
- Your Compute HCI nodes require extra disks to use as OSDs.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
-
You have created the overcloud networks by using the
OpenStackNetConfig
custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator. -
You have created
ConfigMaps
to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator. - You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
-
You have created and applied an
OpenStackConfigGenerator
custom resource to render Ansible playbooks for overcloud configuration.
7.2. Creating a roles_data.yaml file with the Compute HCI role for director Operator
To include configuration for the Compute HCI role in your overcloud, you must include the Compute HCI role in the roles_data.yaml
file that you include with your overcloud deployment.
Ensure that you use roles_data.yaml
as the file name.
Procedure
Access the remote shell for
openstackclient
:$ oc rsh -n openstack openstackclient
Unset the
OS_CLOUD
environment variable:$ unset OS_CLOUD
Change to the
cloud-admin
directory:$ cd /home/cloud-admin/
Generate a new
roles_data.yaml
file with theController
andComputeHCI
roles:$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI
Exit the
openstackclient
pod:$ exit
Copy the custom
roles_data.yaml
file from theopenstackclient
pod to your custom templates directory:$ oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack
Additional resources
7.3. Configuring HCI networking in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute HCI role.
Procedure
Create a directory for your custom templates:
$ mkdir custom_templates
-
Create a custom template file named
multiple_nics_vlans_dvr.j2
in yourcustom_templates
directory. -
Add configuration for the NICs of your bare-metal nodes to your
multiple_nics_vlans_dvr.j2
file. For an example NIC configuration file, see Custom NIC heat template for HCI Compute nodes. Create a directory for your custom environment files:
$ mkdir custom_environment_files
Map the NIC template for your overcloud role in the
network-environment.yaml
environment file in yourcustom_environment_files
directory:

parameter_defaults:
  ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
Additional resources
7.4. Custom NIC heat template for HCI Compute nodes
The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
| --- | --- | --- |
| Control Plane, Storage, Internal API | N/A | `nic1` (Control Plane), VLANs on `nic4` (Storage, Internal API) |
| External, Tenant | `br-ex`, `br-tenant` | `nic3`, `nic4` |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2
in your custom_templates
directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
- type: vlan
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}
7.5. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap
object named tripleo-tarball-config
. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
Navigate to the location of your custom templates:
$ cd ~/custom_templates
Archive the templates into a gzipped tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
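Before you upload the tarball, you can list its contents to confirm that the custom templates and the `roles_data.yaml` file are included; this verification step is an optional suggestion:

$ tar -tzf custom-config.tar.gz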
Create the
tripleo-tarball-config ConfigMap
CR and use the tarball as data:

$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
Verify that the
ConfigMap
CR is created:

$ oc get configmap/tripleo-tarball-config -n openstack
Additional resources
7.6. Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator
The following example is an environment file that contains Red Hat Ceph Storage configuration for the Compute HCI nodes. This configuration maps the OSD nodes to the sdb
, sdc
, and sdd
devices and enables HCI with the is_hci
option.
You can modify this configuration to suit the storage configuration of your bare-metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum
parameter.
To use this template in your deployment, copy the contents of the example to compute-hci.yaml
in your custom_environment_files
directory on your workstation.
resource_registry:
  OS::TripleO::Services::CephMgr: deployment/cephadm/ceph-mgr.yaml
  OS::TripleO::Services::CephMon: deployment/cephadm/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: deployment/cephadm/ceph-osd.yaml
  OS::TripleO::Services::CephClient: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  CephDynamicSpec: true
  CephSpecFqdn: true
  CephConfigOverrides:
    rgw_swift_enforce_content_length: true
    rgw_swift_versioning_enabled: true
    osd:
      osd_memory_target_autotune: true
      osd_numa_auto_affinity: true
    mgr:
      mgr/cephadm/autotune_memory_target_ratio: 0.2
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderRbdPoolName: "volumes"
  NovaRbdPoolName: "vms"
  GlanceRbdPoolName: "images"
  CephPoolDefaultPgNum: 32
  CephPoolDefaultSize: 2
7.7. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap
object named heat-env-config
to store and retrieve environment files. The ConfigMap
object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap
contains two environment files:
...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
Upload a set of custom environment files from a directory to a ConfigMap
object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
Create the
heat-env-config ConfigMap
object:

$ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -
- Replace `<dir_custom_environment_files>` with the directory that contains the environment files you want to use in your overcloud deployment. The `ConfigMap` object stores these as individual `data` entries.
Verify that the
heat-env-config ConfigMap
object contains all the required environment files:

$ oc get configmap/heat-env-config -n openstack
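To see exactly which environment files the `ConfigMap` holds, you can list its `data` keys; the `jq` filter here is a generic pattern, not taken from the product documentation:

$ oc get configmap heat-env-config -n openstack -o json | jq '.data | keys'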
7.8. Creating HCI Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet
custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet
CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
-
You have used the
OpenStackNetConfig
CR to create a control plane network and any additional isolated networks. -
You have created a control plane with the
OpenStackControlPlane
CRD.
Procedure
Create a file named
openstack-hcicompute.yaml
on your workstation. Include the resource specification for the HCI Compute nodes. For example, the specification for 3 HCI Compute nodes is as follows:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: computehci
  namespace: openstack
spec:
  count: 3
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp8s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
    - storage_mgmt
  roleName: ComputeHCI
  passwordSecret: userpassword
-
Save the
openstack-hcicompute.yaml
file. Create the HCI Compute nodes:
$ oc create -f openstack-hcicompute.yaml -n openstack
Verify that the resource for the HCI Compute nodes is created:
$ oc get openstackbaremetalset/computehci -n openstack
To verify the creation of the HCI Compute nodes, view the bare-metal machines that RHOCP manages:
$ oc get baremetalhosts -n openshift-machine-api
-
Create the Ansible playbooks for overcloud configuration with the
OpenStackConfigGenerator
CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. - Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
Chapter 8. Deploying RHOSP with an external Red Hat Ceph Storage cluster with director Operator
You can use director Operator (OSPdO) to deploy an overcloud that connects to an external Red Hat Ceph Storage cluster.
Prerequisites
- You have an external Red Hat Ceph Storage cluster.
- You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator.
-
You have created the overcloud networks by using the
OpenStackNetConfig
custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator. -
You have created
ConfigMaps
to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator. - You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator.
-
You have created and applied an
OpenStackConfigGenerator
custom resource to render Ansible playbooks for overcloud configuration.
8.1. Configuring networking for the Compute role in director Operator
Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute role.
Procedure
Create a directory for your custom templates:
$ mkdir custom_templates
-
Create a custom template file named
multiple_nics_vlans_dvr.j2
in yourcustom_templates
directory. -
Add configuration for the NICs of your bare-metal Compute nodes to your
multiple_nics_vlans_dvr.j2
file. For an example NIC configuration file, see Custom NIC heat template for Compute nodes. Create a directory for your custom environment files:
$ mkdir custom_environment_files
Map the NIC template for your overcloud role in the
network-environment.yaml
environment file in yourcustom_environment_files
directory:

parameter_defaults:
  ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
Additional resources
8.2. Custom NIC heat template for Compute nodes
The following example is a heat template that contains NIC configuration for the Compute bare-metal nodes in an overcloud that connects to an external Red Hat Ceph Storage cluster. The configuration in the heat template maps the networks to the following bridges and interfaces:
| Networks | Bridge | Interface |
| --- | --- | --- |
| Control Plane, Storage, Internal API | N/A | `nic1` (Control Plane), VLANs on `nic4` (Storage, Internal API) |
| External, Tenant | `br-ex`, `br-tenant` | `nic3`, `nic4` |
To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2
in your custom_templates
directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes.
Example
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
# BMH provisioning interface used for ctlplane
- type: interface
  name: nic1
  mtu: 1500
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Disable OCP cluster interface
- type: interface
  name: nic2
  mtu: 1500
  use_dhcp: false
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network == 'External' %}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  use_dhcp: false
{% if network in role_networks %}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
  members:
  - type: interface
    name: nic3
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    primary: true
{% endif %}
{% endfor %}
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic4
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    primary: true
{% for network in networks_all if network not in networks_skip_config|default([]) %}
{% if network not in ["External"] and network in role_networks %}
- type: vlan
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endif %}
{% endfor %}
8.3. Adding custom templates to the overcloud configuration
Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap
object named tripleo-tarball-config
. This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template.
All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted.
Prerequisites
- The custom overcloud templates that you want to apply to provisioned nodes.
Procedure
Navigate to the location of your custom templates:
$ cd ~/custom_templates
Archive the templates into a gzipped tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
Create the
tripleo-tarball-config ConfigMap
CR and use the tarball as data:

$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
Verify that the
ConfigMap
CR is created:

$ oc get configmap/tripleo-tarball-config -n openstack
Additional resources
8.4. Custom environment file for configuring external Ceph Storage usage in director Operator
To integrate with an external Red Hat Ceph Storage cluster, include an environment file with parameters and values similar to those shown in the following example. The example enables the CephExternal
and CephClient
services on your overcloud nodes, and sets the pools for different RHOSP services.
You can modify this configuration to suit your storage configuration.
To use this template in your deployment, copy the contents of the example to ceph-ansible-external.yaml
in your custom_environment_files
directory on your workstation.
resource_registry:
  OS::TripleO::Services::CephExternal: deployment/cephadm/ceph-client.yaml

parameter_defaults:
  # The FSID of the external Ceph Storage cluster
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # The cephx key of the client user
  CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # The IP addresses of the external Ceph monitor hosts
  CephExternalMonHost: '172.16.1.7, 172.16.1.8'
  ExternalCeph: true
  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  # Uncomment below if enabling legacy telemetry
  # GnocchiBackend: rbd
  # If the Ceph pools which host VMs, Volumes and Images do not match these
  # names OR the client keyring to use is not named 'openstack', edit the
  # following as needed.
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  CinderBackupRbdPoolName: backups
  GlanceRbdPoolName: images
  # Uncomment below if enabling legacy telemetry
  # GnocchiRbdPoolName: metrics
  CephClientUserName: openstack
  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false
8.5. Adding custom environment files to the overcloud configuration
To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap
object named heat-env-config
to store and retrieve environment files. The ConfigMap
object stores the environment files in the following format:
...
data:
  <environment_file_name>: |+
    <environment_file_contents>
For example, the following ConfigMap
contains two environment files:
...
data:
  network_environment.yaml: |+
    parameter_defaults:
      ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
Upload a set of custom environment files from a directory to a ConfigMap
object that you can include as a part of your overcloud deployment.
Prerequisites
- The custom environment files for your overcloud deployment.
Procedure
Create the
heat-env-config ConfigMap
object:

$ oc create configmap -n openstack heat-env-config \
    --from-file=~/<dir_custom_environment_files>/ \
    --dry-run=client -o yaml | oc apply -f -
- Replace `<dir_custom_environment_files>` with the directory that contains the environment files you want to use in your overcloud deployment. The `ConfigMap` object stores these as individual `data` entries.
Verify that the
heat-env-config ConfigMap
object contains all the required environment files:

$ oc get configmap/heat-env-config -n openstack
8.6. Creating Compute nodes and deploying the overcloud
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet
custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBareMetalSet
CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
-
You have used the
OpenStackNetConfig
CR to create a control plane network and any additional isolated networks. -
You have created a control plane with the
OpenStackControlPlane
CRD.
Procedure
-
Create your Compute nodes by using the
OpenStackBaremetalSet
CRD. For more information, see Creating Compute nodes with the OpenStackBaremetalSet CRD. -
Create the Ansible playbooks for overcloud configuration with the
OpenStackConfigGenerator
CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. - Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud.
- Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
Chapter 9. Accessing an overcloud deployed with director Operator
After you deploy the overcloud with director Operator (OSPdO), you can access it and run commands with the openstack
client tool. The main access point for the overcloud is through the OpenStackClient
pod that OSPdO deploys as a part of the OpenStackControlPlane
resource that you created.
9.1. Accessing the OpenStackClient pod
The OpenStackClient
pod is the main access point to run commands against the overcloud. This pod contains the client tools and authentication details that you require to perform actions on your overcloud. To access the pod from your workstation, you must use the oc
command on your workstation to connect to the remote shell for the pod.
When you access an overcloud that you deploy without director Operator (OSPdO), you usually run the source ~/overcloudrc
command to set environment variables to access the overcloud. You do not require this step with an overcloud that you deploy with OSPdO.
Procedure
Access the remote shell for
openstackclient
:$ oc rsh -n openstack openstackclient
Change to the
cloud-admin
home directory:$ cd /home/cloud-admin
Run your
openstack
commands. For example, you can create adefault
network with the following command:$ openstack network create default
Additional resources
9.2. Accessing the overcloud dashboard
You access the dashboard of an overcloud that you deploy with director Operator (OSPdO) by using the same method as a standard overcloud: access the virtual IP address reserved by the control plane by using a web browser.
Procedure
Optional: To log in as the `admin` user, obtain the admin password from the `AdminPassword` parameter in the `tripleo-passwords` secret:

$ oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d
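The decoded output is a YAML file that contains all overcloud passwords. To extract only the admin password, you can filter the output, for example with a simple `grep` (an illustrative convenience, not a documented step):

$ oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d | grep 'AdminPassword:'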
Retrieve the IP address reserved for the control plane from your
OpenStackNetConfig
CR:

spec:
  ...
  reservations:
    controlplane:
      ipReservations:
        ctlplane: 172.22.0.110
        external: 10.0.0.10
        internal_api: 172.17.0.10
        storage: 172.18.0.10
        storage_mgmt: 172.19.0.10
- Open a web browser.
- Enter the IP address for the control plane in the URL field.
- Log in to the dashboard with your username and password.
Chapter 10. Scaling Compute nodes with director Operator
If you require more or fewer compute resources for your overcloud, you can scale the number of Compute nodes according to your requirements.
10.1. Adding Compute nodes to your overcloud with director Operator
To add more Compute nodes to your overcloud, you must increase the node count for the compute
OpenStackBaremetalSet
resource. When a new node is provisioned, you create a new OpenStackConfigGenerator
resource to generate a new set of Ansible playbooks, then use the OpenStackConfigVersion
to create or update the OpenStackDeploy
object to reapply the Ansible configuration to your overcloud.
Procedure
Check that you have enough hosts in a ready state in the
openshift-machine-api
namespace:$ oc get baremetalhosts -n openshift-machine-api
For more information on managing your bare-metal hosts, see Managing bare metal hosts.
Increase the
count
parameter for thecompute
OpenStackBaremetalSet
resource:$ oc patch openstackbaremetalset compute --type=merge --patch '{"spec":{"count":3}}' -n openstack
The
OpenStackBaremetalSet
resource automatically provisions the new nodes with the Red Hat Enterprise Linux base operating system.Wait until the provisioning process completes. Check the nodes periodically to determine the readiness of the nodes:
$ oc get baremetalhosts -n openshift-machine-api
$ oc get openstackbaremetalset
-
Optional: Reserve static IP addresses for networks on the new Compute nodes. For more information, see Reserving static IP addresses for added Compute nodes with the
OpenStackNetConfig
CRD. -
Generate the Ansible playbooks by using
OpenStackConfigGenerator
. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. - Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator.
Additional resources
10.2. Reserving static IP addresses for added Compute nodes with the OpenStackNetConfig
CRD
Use the OpenStackNetConfig
CRD to define IP addresses that you want to reserve for the Compute node you added to your overcloud.
Use the following commands to view the OpenStackNetConfig
CRD definition and specification schema:
$ oc describe crd openstacknetconfig
$ oc explain openstacknetconfig.spec
Procedure
-
Open the
openstacknetconfig.yaml
file for the overcloud on your workstation. Add the following configuration to
openstacknetconfig.yaml
to create theOpenStackNetConfig
custom resource (CR):

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
Reserve static IP addresses for networks on specific nodes:
spec:
  ...
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120
    compute-0:
      ipReservations:
        ctlplane: 172.22.0.140
        internal_api: 172.17.0.40
        storage: 172.18.0.40
        tenant: 172.20.0.40
    ...
    # The key for the ctlplane VIPs
    controlplane:
      ipReservations:
        ctlplane: 172.22.0.110
        external: 10.0.0.10
        internal_api: 172.17.0.10
        storage: 172.18.0.10
        storage_mgmt: 172.19.0.10
      macReservations: {}

Note: Reservations take precedence over any autogenerated IP addresses.
-
Save the
openstacknetconfig.yaml
definition file. Create the overcloud network configuration:
$ oc create -f openstacknetconfig.yaml -n openstack
Verification
To verify that the overcloud network configuration is created, view the resources for the overcloud network configuration:
$ oc get openstacknetconfig/openstacknetconfig
View the
OpenStackNetConfig
API and child resources:$ oc get openstacknetconfig/openstacknetconfig -n openstack $ oc get openstacknetattachment -n openstack $ oc get openstacknet -n openstack
If you see errors, check the underlying `network-attachment-definition` and node network configuration policies:

$ oc get network-attachment-definitions -n openstack
$ oc get nncp
10.3. Removing Compute nodes from your overcloud with director Operator
To remove a Compute node from your overcloud, you must disable the Compute node, mark it for deletion, and decrease the node count for the compute
OpenStackBaremetalSet
resource.
If you later scale the overcloud with a new node in the same role, the new node reuses the host name with the lowest available ID suffix and the corresponding IP reservation.
Prerequisites
- The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes.
Procedure
Access the remote shell for
openstackclient
:$ oc rsh -n openstack openstackclient
Identify the Compute node that you want to remove:
$ openstack compute service list
Disable the Compute service on the node to prevent the node from scheduling new instances:
$ openstack compute service set <hostname> nova-compute --disable
Annotate the bare-metal node to prevent Metal3 from starting the node:
$ oc annotate baremetalhost <node> baremetalhost.metal3.io/detached=true
$ oc logs --since=1h <metal3-pod> metal3-baremetal-operator | grep -i detach
$ oc get baremetalhost <node> -o json | jq .status.operationalStatus
"detached"
- Replace `<node>` with the name of the `BareMetalHost` resource.
- Replace `<metal3-pod>` with the name of your `metal3` pod.
Log in to the Compute node as the
root
user and shut down the bare-metal node:

[root@compute-0 ~]# shutdown -h now
If the Compute node is not accessible, complete the following steps:
-
Log in to a Controller node as the
root
user. If Instance HA is enabled, disable the STONITH device for the Compute node:
[root@controller-0 ~]# pcs stonith disable <stonith_resource_name>
- Replace `<stonith_resource_name>` with the name of the STONITH resource that corresponds to the node. The resource name uses the format `<resource_agent>-<host_mac>`. You can find the resource agent and the host MAC address in the `FencingConfig` section of the `fencing.yaml` file.
- Use IPMI to power off the bare-metal node. For more information, see your hardware vendor documentation.
Retrieve the
BareMetalHost
resource that corresponds to the node that you want to remove:

$ oc get openstackbaremetalset compute -o json | jq '.status.baremetalHosts | to_entries[] | "\(.key) => \(.value | .hostRef)"'
"compute-0, openshift-worker-3"
"compute-1, openshift-worker-4"
To change the status of the
annotatedForDeletion
parameter totrue
in the `OpenStackBaremetalSet` resource, annotate the `BareMetalHost` resource with `osp-director.openstack.org/delete-host=true`:

$ oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite
Optional: Confirm that the
annotatedForDeletion
status has changed to `true` in the `OpenStackBaremetalSet` resource:

$ oc get openstackbaremetalset compute -o json -n openstack | jq .status
{
  "baremetalHosts": {
    "compute-0": {
      "annotatedForDeletion": true,
      "ctlplaneIP": "192.168.25.105/24",
      "hostRef": "openshift-worker-3",
      "hostname": "compute-0",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-3",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-3"
    },
    "compute-1": {
      "annotatedForDeletion": false,
      "ctlplaneIP": "192.168.25.106/24",
      "hostRef": "openshift-worker-4",
      "hostname": "compute-1",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-4",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-4"
    }
  },
  "provisioningStatus": {
    "readyCount": 2,
    "reason": "All requested BaremetalHosts have been provisioned",
    "state": "provisioned"
  }
}
Decrease the
count
parameter for thecompute OpenStackBaremetalSet
resource:$ oc patch openstackbaremetalset compute --type=merge --patch '{"spec":{"count":1}}' -n openstack
When you reduce the resource count of the
OpenStackBaremetalSet
resource, you trigger the corresponding controller to handle the resource deletion, which causes the following actions:
- Director Operator deletes the corresponding IP reservations from `OpenStackIPSet` and `OpenStackNetConfig` for the deleted node.
- Director Operator flags the IP reservation entry in the `OpenStackNet` resource as deleted:

$ oc get osnet ctlplane -o json -n openstack | jq .reservations
{
  "compute-0": {
    "deleted": true,
    "ip": "172.22.0.140"
  },
  "compute-1": {
    "deleted": false,
    "ip": "172.22.0.100"
  },
  "controller-0": {
    "deleted": false,
    "ip": "172.22.0.120"
  },
  "controlplane": {
    "deleted": false,
    "ip": "172.22.0.110"
  },
  "openstackclient-0": {
    "deleted": false,
    "ip": "172.22.0.251"
  }
}
-
Optional: To make the IP reservations of the deleted
OpenStackBaremetalSet
resource available for other roles to use, set the value of thespec.preserveReservations
parameter to false in theOpenStackNetConfig
object. Access the remote shell for
openstackclient
:$ oc rsh openstackclient -n openstack
Remove the Compute service entries from the overcloud:
$ openstack compute service list
$ openstack compute service delete <service-id>
Check the Compute network agents entries in the overcloud and remove them if they exist:
$ openstack network agent list
$ for AGENT in $(openstack network agent list --host <scaled-down-node> -c ID -f value) ; do openstack network agent delete $AGENT ; done
Exit from
openstackclient
:$ exit
Chapter 11. Performing a minor update of the RHOSP overcloud with director Operator
After you update the openstackclient
pod, update the overcloud by running the overcloud and container image preparation deployments, updating your nodes, and running the overcloud update converge deployment. During a minor update, the control plane API is available.
A minor update of your Red Hat OpenStack Platform (RHOSP) environment involves updating the RPM packages and containers on the overcloud nodes. You might also need to update the configuration of some services. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment:
- Prepare your RHOSP environment for the minor update.
-
Optional: Update the
ovn-controller
container. - Update Controller nodes and composable nodes that contain Pacemaker services.
- Update Compute nodes.
- Update Red Hat Ceph Storage nodes.
- Update the Red Hat Ceph Storage cluster.
- Reboot the overcloud nodes.
Prerequisites
- You have a backup of your RHOSP deployment. For more information, see Backing up and restoring a director Operator deployed overcloud.
11.1. Preparing director Operator for a minor update
To prepare your Red Hat OpenStack Platform (RHOSP) environment to perform a minor update with director Operator (OSPdO), complete the following tasks:
- Lock the RHOSP environment to a Red Hat Enterprise Linux (RHEL) release.
- Update RHOSP repositories.
- Update the container image preparation file.
- Disable fencing in the overcloud.
11.1.1. Locking the RHOSP environment to a RHEL release
Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release.
Procedure
Copy the overcloud subscription management environment file,
rhsm.yaml
, toopenstackclient
:$ oc cp rhsm.yaml openstackclient:/home/cloud-admin/rhsm.yaml
Access the remote shell for the
openstackclient
pod:$ oc rsh openstackclient
Open the
rhsm.yaml
file and check if your subscription management configuration includes the `rhsm_release` parameter. If the `rhsm_release` parameter is not present, add it and set it to `9.2`:

parameter_defaults:
  RhsmVars:
    …
    rhsm_username: "myusername"
    rhsm_password: "p@55w0rd!"
    rhsm_org_id: "1234567"
    rhsm_pool_ids: "1a85f9223e3d5e43013e3d6e8ff506fd"
    rhsm_method: "portal"
    rhsm_release: "9.2"
-
Save the
rhsm.yaml
file. Create a playbook named
set_release.yaml
that contains a task to lock the operating system version to RHEL 9.2 on all nodes:

- hosts: all
  gather_facts: false
  tasks:
    - name: set release to 9.2
      command: subscription-manager release --set=9.2
      become: true
Run the
set_release.yaml
playbook on theopenstackclient
pod:$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/set_release.yaml --limit Controller,Compute
Use the
--limit
option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because you might have a different subscription for these nodes.
Note: To manually lock a node to a version, log in to the node and run the `subscription-manager release` command:

$ sudo subscription-manager release --set=9.2
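To verify the lock across your nodes before continuing, you can query the release setting through the same inventory with an ad-hoc command; this check is a suggestion and assumes the inventory groups used earlier:

$ ansible all -i /home/cloud-admin/ctlplane-ansible-inventory --limit Controller,Compute -b -m command -a "subscription-manager release --show"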
Exit the remote shell for the
openstackclient
pod:$ exit
11.1.2. Updating RHOSP repositories
Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1.
Procedure
Open the
rhsm.yaml
file and update the `rhsm_repos` parameter to the correct repository versions:

parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-9-for-x86_64-baseos-eus-rpms
      - rhel-9-for-x86_64-appstream-eus-rpms
      - rhel-9-for-x86_64-highavailability-eus-rpms
      - openstack-17.1-for-rhel-9-x86_64-rpms
      - fast-datapath-for-rhel-9-x86_64-rpms
-
Save the
rhsm.yaml
file. Access the remote shell for the
openstackclient
pod:$ oc rsh openstackclient
Create a playbook named
update_rhosp_repos.yaml
that contains a task to set the repositories to RHOSP 17.1 on all nodes:

- hosts: all
  gather_facts: false
  tasks:
    - name: change osp repos
      command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms
      become: true
Run the
update_rhosp_repos.yaml
playbook on theopenstackclient
pod:$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute
Use the
--limit
option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription.
Create a playbook named `update_ceph_repos.yaml` that contains a task to set the repositories to RHOSP 17.1 on all Red Hat Ceph Storage nodes:

- hosts: all
  gather_facts: false
  tasks:
    - name: change ceph repos
      command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms
      become: true
Run the
update_ceph_repos.yaml
playbook on theopenstackclient
pod:$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage
Use the
--limit
option to apply the content to Red Hat Ceph Storage nodes.
Exit the remote shell for the
openstackclient
pod:$ exit
11.1.3. Updating the container image preparation file
The container preparation file is the file that contains the ContainerImagePrepare
parameter. You use this file to define the rules for obtaining container images for the overcloud.
Before you update your environment, check the file to ensure that you obtain the correct image versions.
Procedure
-
Edit the container preparation file. The default name for this file is
containers-prepare-parameter.yaml
. Ensure the
tag
parameter is set to17.1
for each rule set:parameter_defaults: ContainerImagePrepare: - push_destination: false set: ... tag: '17.1' tag_from_label: '{version}-{release}'
Note: If you do not want to use a specific tag for the update, such as `17.1` or `17.1.1`, remove the `tag` key-value pair and specify `tag_from_label` only. The `tag_from_label` value uses the installed Red Hat OpenStack Platform (RHOSP) version to determine the tag to use as part of the update process.
Save the
containers-prepare-parameter.yaml
file.
11.1.4. Disabling fencing in the overcloud
Before you update the overcloud, ensure that fencing is disabled.
If fencing is deployed in your environment during the Controller nodes update process, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results.
If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update.
Procedure
Access the remote shell for the
openstackclient
pod:$ oc rsh openstackclient
Log in to a Controller node and run the Pacemaker command to disable fencing:
$ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=false"
- Replace `<controller-0.ctlplane>` with the name of your Controller node.
Exit the remote shell for the
openstackclient
pod:$ exit
Additional Resources
11.2. Running the overcloud update preparation for director Operator
To prepare the overcloud for the update process, generate an update prepare configuration, which creates updated ansible playbooks and prepares the nodes for the update.
Procedure
Create an
OpenStackConfigGenerator
resource called `osconfiggenerator-update-prepare.yaml`:

$ cat <<EOF > osconfiggenerator-update-prepare.yaml
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: "update"
  namespace: openstack
spec:
  gitSecret: git-secret
  enableFencing: false
  heatEnvs:
    - lifecycle/update-prepare.yaml
  heatEnvConfigMap: heat-env-config-update
  tarballConfigMap: tripleo-tarball-config-update
EOF
Apply the configuration:
$ oc apply -f osconfiggenerator-update-prepare.yaml
- Wait until the update preparation process completes.
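Each update deployment in the following sections references a `<config_version>`. As with the initial deployment, you can retrieve the latest `OpenStackConfigVersion` hash once the preparation completes, reusing the command from the deployment chapter:

$ oc get -n openstack --sort-by {.metadata.creationTimestamp} openstackconfigversion -o json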
11.3. Updating the ovn-controller
container on all overcloud servers
If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller
container to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. The update occurs on every overcloud server that runs the ovn-controller
container.
The following procedure updates the ovn-controller
containers on Compute nodes before it updates the ovn-northd
service on Controller nodes. If you accidentally update the ovn-northd
service before following this procedure, you might not be able to reach your virtual machine instances or create new instances or virtual networks. The following procedure restores connectivity.
Procedure
Create an
OpenStackDeploy
custom resource (CR) named `osdeploy-ovn-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ovn-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    tags:
      - ovn
Apply the updated configuration:
$ oc apply -f osdeploy-ovn-update.yaml
-
Wait until the
ovn-controller
container update completes.
11.4. Updating all Controller nodes
Update all the Controller nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
Procedure
Create an
OpenStackDeploy
custom resource (CR) named `osdeploy-controller-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: controller-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: Controller
Apply the updated configuration:
$ oc apply -f osdeploy-controller-update.yaml
- Wait until the Controller node update completes.
11.5. Updating all Compute nodes
Update all Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update Compute nodes, create an OpenStackDeploy
custom resource (CR) with the limit: Compute
option to restrict operations only to the Compute nodes.
Procedure
Create an
OpenStackDeploy
CR named `osdeploy-compute-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: compute-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: Compute
Apply the updated configuration:
$ oc apply -f osdeploy-compute-update.yaml
- Wait until the Compute node update completes.
11.6. Updating all HCI Compute nodes
Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update the HCI Compute nodes, create an OpenStackDeploy
custom resource (CR) with the limit: ComputeHCI
option to restrict operations to only the HCI nodes. You must also create an OpenStackDeploy
CR with the mode: external-update
and tags: ["ceph"]
options to perform an update to a containerized Red Hat Ceph Storage 4 cluster.
Procedure
Create an
OpenStackDeploy
CR named `osdeploy-computehci-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: computehci-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: ComputeHCI
Apply the updated configuration:
$ oc apply -f osdeploy-computehci-update.yaml
- Wait until the ComputeHCI node update completes.
Create an
OpenStackDeploy
CR named `osdeploy-ceph-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: external-update
  advancedSettings:
    tags:
      - ceph
Apply the updated configuration:
$ oc apply -f osdeploy-ceph-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
11.7. Updating all Red Hat Ceph Storage nodes
Update the Red Hat Ceph Storage nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the `CephStorage` role are updated to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations.
Procedure
Create an
OpenStackDeploy
custom resource (CR) named `osdeploy-cephstorage-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: cephstorage-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    limit: CephStorage
Apply the updated configuration:
$ oc apply -f osdeploy-cephstorage-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
Create an
OpenStackDeploy
CR named `osdeploy-ceph-update.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    tags:
      - ceph
Apply the updated configuration:
$ oc apply -f osdeploy-ceph-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
11.8. Updating the Red Hat Ceph Storage cluster
Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm
Orchestrator.
Procedure
Access the remote shell for the
openstackclient
pod:$ oc rsh openstackclient
Log in to a Controller node:
$ ssh <controller-0.ctlplane>
- Replace `<controller-0.ctlplane>` with the name of your Controller node.
Log in to the `cephadm` shell:

[cloud-admin@controller-0 ~]$ sudo cephadm shell
-
Upgrade your Red Hat Ceph Storage cluster by using
cephadm
. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide. Exit the remote shell for the
openstackclient
pod:$ exit
11.9. Performing online database updates
Some overcloud components require an online update or migration of their database tables. Online database updates apply to the following components:
- Block Storage service (cinder)
- Compute service (nova)
Procedure
Create an
OpenStackDeploy
custom resource (CR) named `osdeploy-online-migration.yaml`:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: online-migration
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: external-update
  advancedSettings:
    tags:
      - online_upgrade
Apply the updated configuration:
$ oc apply -f osdeploy-online-migration.yaml
11.10. Re-enabling fencing in the overcloud
To update to the latest Red Hat OpenStack Platform (RHOSP) 17.1, you must re-enable fencing in the overcloud.
Procedure
Access the remote shell for the
openstackclient
pod:$ oc rsh openstackclient
Log in to a Controller node and run the Pacemaker command to enable fencing:
$ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=true"
- Replace `<controller-0.ctlplane>` with the name of your Controller node.
Exit the remote shell for the
openstackclient
pod:$ exit
11.11. Rebooting the overcloud
After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.1 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures.
Use the following guidance to understand how to reboot different node types:
- If you reboot all nodes in one role, reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation.
Complete the reboot procedures on the nodes in the following order:
11.11.1. Rebooting Controller and composable nodes
Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes.
Procedure
- Log in to the node that you want to reboot.
Optional: If the node uses Pacemaker resources, stop the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
Reboot the node:
[tripleo-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Verification
Verify that the services are enabled.
If the node uses Pacemaker services, check that the node has rejoined the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs status
If the node uses Systemd services, check that all services are enabled:
[tripleo-admin@overcloud-controller-0 ~]$ sudo systemctl status
If the node uses containerized services, check that all containers on the node are active:
[tripleo-admin@overcloud-controller-0 ~]$ sudo podman ps
11.11.2. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Prerequisites
On a Ceph Monitor or Controller node that is running the
ceph-mon
service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is `active+clean`:

$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of
HEALTH_OK
.If the Ceph cluster status is unhealthy, it returns a status of
HEALTH_WARN
orHEALTH_ERR
. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
Log in to a Ceph Monitor or Controller node that is running the
ceph-mon
service, and disable Ceph Storage cluster rebalancing temporarily:$ sudo cephadm shell -- ceph osd set noout $ sudo cephadm shell -- ceph osd set norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the `noout` and `norebalance` flags, for example: `sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring`.
- Select the first Ceph Storage node that you want to reboot and log in to the node.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
Log in to the node and check the Ceph cluster status:
$ sudo cephadm shell -- ceph status
Check that the
pgmap
reports allpgs
as normal (active+clean
).- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
When complete, log in to a Ceph Monitor or Controller node that is running the
ceph-mon
service and enable Ceph cluster rebalancing:$ sudo cephadm shell -- ceph osd unset noout $ sudo cephadm shell -- ceph osd unset norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the `noout` and `norebalance` flags, for example: `sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring`.
Perform a final status check to verify that the cluster reports `HEALTH_OK`:
$ sudo cephadm shell -- ceph status
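If you are rebooting a large number of Ceph Storage nodes, you can script the wait between reboots instead of checking the status by hand. The following is a minimal sketch that polls the cluster until it reports `HEALTH_OK`; the 30-second polling interval is an assumption that you can tune for your environment:
$ until sudo cephadm shell -- ceph health | grep -q HEALTH_OK; do echo "Waiting for the Ceph cluster to settle..."; sleep 30; done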
11.11.3. Rebooting Compute nodes
To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot.
Migrating instances workflow
- Decide whether to migrate instances to another Compute node before rebooting the node.
- Select and disable the Compute node that you want to reboot so that it does not provision new instances.
- Migrate the instances to another Compute node.
- Reboot the empty Compute node.
- Enable the empty Compute node.
Prerequisites
Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.
Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation.
Note: If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation.
If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:
NovaResumeGuestsStateOnHostBoot
Determines whether to return instances to the same state on the Compute node after reboot. When set to `False`, the instances remain down and you must start them manually. The default value is `False`.
NovaResumeGuestsShutdownTimeout
Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to `0`. The default value is `300`.
For more information about overcloud parameters and their usage, see Overcloud parameters.
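If you decide not to migrate the instances, you can set these parameters in one of your heat environment files. The following is a minimal sketch; the values shown are illustrative, not recommendations:
parameter_defaults:
  # Resume instances that were running before the Compute node rebooted
  NovaResumeGuestsStateOnHostBoot: true
  # Seconds to wait for each instance to shut down before the reboot
  NovaResumeGuestsShutdownTimeout: 300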
Procedure
- Log in to the undercloud as the `stack` user.
Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot:
(undercloud)$ source ~/overcloudrc
(overcloud)$ openstack compute service list
Identify the host name of the Compute node that you want to reboot.
Disable the Compute service on the Compute node that you want to reboot:
(overcloud)$ openstack compute service set <hostname> nova-compute --disable
- Replace `<hostname>` with the host name of your Compute node.
List all instances on the Compute node:
(overcloud)$ openstack server list --host <hostname> --all-projects
Optional: If you decide to migrate the instances to another Compute node, use one of the following commands:
To migrate the instance to a specific host, run the following command:
(overcloud) $ openstack server migrate <instance_id> --live <target_host> --wait
- Replace `<instance_id>` with your instance ID.
- Replace `<target_host>` with the host that you are migrating the instance to.
Let `nova-scheduler` automatically select the target host:
(overcloud) $ nova live-migration <instance_id>
Live migrate all instances at once:
$ nova host-evacuate-live <hostname>
Note: The `nova` command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm that the migration was successful:
(overcloud) $ openstack server list --host <hostname> --all-projects
- Continue to migrate instances until none remain on the Compute node.
Log in to the Compute node and reboot the node:
[tripleo-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Re-enable the Compute node:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set <hostname> nova-compute --enable
Check that the Compute node is enabled:
(overcloud) $ openstack compute service list
11.11.4. Validating RHOSP after the overcloud update
After you update your Red Hat OpenStack Platform (RHOSP) environment, validate your overcloud with the `tripleo-validations` playbooks.
For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.
Procedure
- Log in to the undercloud host as the `stack` user.
Source the `stackrc` undercloud credentials file:
$ source ~/stackrc
Run the validation:
$ validation run -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml --group post-update
- Replace <stack> with the name of the stack.
Verification
- To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
If a host is not found when you run a validation, the command reports the status as `SKIPPED`. A status of `SKIPPED` means that the validation was not executed, which is expected. If a validation's pass criteria are not met, the command reports the status as `FAILED`. A `FAILED` validation does not prevent you from using your updated RHOSP environment, but it can indicate an issue with your environment.
Chapter 12. Deploying TLS for public endpoints using director Operator
Deploy the overcloud with TLS-secured public endpoints, addressed either by IP address or by DNS name, in a director Operator (OSPdO) environment.
Prerequisites
- You have installed OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have installed the `oc` command line tool on your workstation.
- You have created the certificate authority, key, and certificate. For more information, see Enabling SSL/TLS on overcloud public endpoints.
12.1. TLS for public endpoint IP addresses
To reference public endpoint IP addresses, add your CA certificates to the `openstackclient` pod by creating a `ConfigMap` resource to store the CA certificates, then referencing that `ConfigMap` resource in the `OpenStackControlPlane` resource.
Procedure
Create a `ConfigMap` resource to store the CA certificates:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cacerts
  namespace: openstack
data:
  local_CA: |
    -----BEGIN CERTIFICATE-----
    …
    -----END CERTIFICATE-----
  another_CA: |
    -----BEGIN CERTIFICATE-----
    …
    -----END CERTIFICATE-----
Create the `OpenStackControlPlane` resource and reference the `ConfigMap` resource:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: <overcloud>
  namespace: openstack
spec:
  caConfigMap: cacerts
- Replace `<overcloud>` with the name of your overcloud control plane.
- Create a file named `tls-certs.yaml` in the `~/custom_environment_files` directory that specifies the generated certificates for the deployment by using the `SSLCertificate`, `SSLIntermediateCertificate`, `SSLKey`, and `CAMap` parameters. A sketch of this file is shown at the end of this procedure.
parameters. Update the
heatEnvConfigMap
to add thetls-certs.yaml
file:$ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -
Create an `OpenStackConfigGenerator` resource and add the required `heatEnvs` configuration files to configure TLS for public endpoint IPs:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
…
spec:
  …
  heatEnvs:
    - ssl/tls-endpoints-public-ip.yaml
    - ssl/enable-tls.yaml
  …
  heatEnvConfigMap: heat-env-config
  tarballConfigMap: tripleo-tarball-config
- Generate the Ansible playbooks by using `OpenStackConfigGenerator` and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
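The following is a minimal sketch of the `tls-certs.yaml` file described in this procedure. All certificate and key bodies are placeholders for your own PEM data, and the `overcloud-ca` key name under `CAMap` is illustrative:
parameter_defaults:
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    …your public endpoint certificate…
    -----END CERTIFICATE-----
  SSLIntermediateCertificate: |
    -----BEGIN CERTIFICATE-----
    …your intermediate CA certificate, if you use one…
    -----END CERTIFICATE-----
  SSLKey: |
    -----BEGIN PRIVATE KEY-----
    …the private key for the certificate…
    -----END PRIVATE KEY-----
  CAMap:
    overcloud-ca:
      content: |
        -----BEGIN CERTIFICATE-----
        …the CA certificate that signed SSLCertificate…
        -----END CERTIFICATE-----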
12.2. TLS for public endpoint DNS names
To reference public endpoint DNS names, add your CA certificates to the `openstackclient` pod by creating a `ConfigMap` resource to store the CA certificates, then referencing that `ConfigMap` resource in the `OpenStackControlPlane` resource.
Procedure
Create a `ConfigMap` resource to store the CA certificates:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cacerts
  namespace: openstack
data:
  local_CA: |
    -----BEGIN CERTIFICATE-----
    …
    -----END CERTIFICATE-----
  another_CA: |
    -----BEGIN CERTIFICATE-----
    …
    -----END CERTIFICATE-----
Create the `OpenStackControlPlane` resource and reference the `ConfigMap` resource:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: <overcloud>
  namespace: openstack
spec:
  caConfigMap: cacerts
- Replace `<overcloud>` with the name of your overcloud control plane.
- Create a file named `tls-certs.yaml` in the `~/custom_environment_files` directory that specifies the generated certificates for the deployment by using the `SSLCertificate`, `SSLIntermediateCertificate`, `SSLKey`, and `CAMap` parameters. The file has the same format as the sketch shown in the previous procedure for IP addresses.
parameters. Update the
heatEnvConfigMap
to add thetls-certs.yaml
file:$ oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -
Create an `OpenStackConfigGenerator` resource and add the required `heatEnvs` configuration files to configure TLS for public endpoint DNS names:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
…
spec:
  …
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
  …
  heatEnvConfigMap: heat-env-config
  tarballConfigMap: tripleo-tarball-config
- Generate the Ansible playbooks by using `OpenStackConfigGenerator` and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
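After the configuration is applied, you can confirm that the public endpoints are registered with the expected HTTPS URLs and DNS names. A quick check from the `openstackclient` pod, assuming the default names used in this guide:
$ oc rsh -n openstack openstackclient
$ openstack endpoint list -c 'Service Name' -c 'Interface' -c 'URL'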
Chapter 13. Changing service account passwords using director Operator
Red Hat OpenStack Platform (RHOSP) services and the databases that they use are authenticated by their Identity service (keystone) credentials. The Identity service generates these RHOSP passwords during the initial RHOSP deployment process. You might be required to periodically update passwords for threat mitigation or security compliance. You can use tools native to director Operator (OSPdO) to change many of the generated passwords after your RHOSP environment is deployed.
13.1. Rotating overcloud service account passwords with director Operator
You can rotate the overcloud service account passwords used with a director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment.
Procedure
Create a backup of the current `tripleo-passwords` secret:
$ oc get secret tripleo-passwords -n openstack -o yaml > tripleo-passwords_backup.yaml
Create a plain text file named `tripleo-overcloud-passwords_preserve_list` to specify that the passwords for the following services should not be rotated:
parameter_defaults
BarbicanSimpleCryptoKek
KeystoneCredential0
KeystoneCredential1
KeystoneFernetKey0
KeystoneFernetKey1
KeystoneFernetKeys
CephClientKey
CephClusterFSID
CephManilaClientKey
CephRgwKey
HeatAuthEncryptionKey
MysqlClustercheckPassword
MysqlMariabackupPassword
PacemakerRemoteAuthkey
PcsdPassword
You can add additional services to this list if there are other services for which you want to preserve the password.
Create a password parameter file, `tripleo-overcloud-passwords.yaml`, that lists the passwords that should not be modified:
$ oc get secret tripleo-passwords -n openstack \
  -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' \
  | base64 -d | grep -f ./tripleo-overcloud-passwords_preserve_list > tripleo-overcloud-passwords.yaml
- Validate that the `tripleo-overcloud-passwords.yaml` file contains the passwords that you do not want to rotate.
Update the `tripleo-passwords` secret:
$ oc create secret generic tripleo-passwords -n openstack \
  --from-file=./tripleo-overcloud-passwords.yaml \
  --dry-run=client -o yaml | oc apply -f -
- Create Ansible playbooks to configure the overcloud with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD.
- Apply the updated configuration. For more information, see Applying overcloud configuration with director Operator.
Verification
Compare the new `NovaPassword` in the secret to what is now installed on the Controller node.
Get the password from the updated secret:
$ oc get secret tripleo-passwords -n openstack -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d | grep NovaPassword
Example output:
NovaPassword: hp4xpt7t2p79ktqjjnxpqwbp6
Retrieve the password for the Compute service (nova) running on the Controller nodes:
Access the `openstackclient` remote shell:
$ oc rsh openstackclient -n openstack
Ensure that you are in the home directory:
$ cd
Retrieve the Compute service password:
$ ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -b -a "grep ^connection /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf"
Example output:
172.22.0.120 | CHANGED | rc=0 >>
connection=mysql+pymysql://nova_api:hp4xpt7t2p79ktqjjnxpqwbp6@172.17.0.10/nova_api?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo
connection=mysql+pymysql://nova:hp4xpt7t2p79ktqjjnxpqwbp6@172.17.0.10/nova?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo
Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator
Deploy nodes with spine-leaf networking architecture to replicate an extensive network topology within your environment. Current restrictions allow only one provisioning network for `Metal3`.
14.1. Creating or updating the OpenStackNetConfig custom resource to define all subnets
Define your `OpenStackNetConfig` custom resource (CR) and specify the subnets for the overcloud networks. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) then renders the configuration and creates, or updates, the network topology.
Prerequisites
- You have installed OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have installed the `oc` command line tool on your workstation.
Procedure
Create a configuration file named `openstacknetconfig.yaml`:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 192.168.25.250
        allocationStart: 192.168.25.100
        cidr: 192.168.25.0/24
        gateway: 192.168.25.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.0.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.0.1
      vlan: 20
      attachConfiguration: br-osp
    - name: internal_api_leaf1
      ipv4:
        allocationEnd: 172.17.1.250
        allocationStart: 172.17.1.10
        cidr: 172.17.1.0/24
        routes:
        - destination: 172.17.0.0/24
          nexthop: 172.17.1.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.1.1
      vlan: 21
      attachConfiguration: br-osp
    - name: internal_api_leaf2
      ipv4:
        allocationEnd: 172.17.2.250
        allocationStart: 172.17.2.10
        cidr: 172.17.2.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.2.1
        - destination: 172.17.0.0/24
          nexthop: 172.17.2.1
      vlan: 22
      attachConfiguration: br-osp
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1350
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
        routes:
        - destination: 172.18.1.0/24
          nexthop: 172.18.0.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.0.1
      vlan: 30
      attachConfiguration: br-osp
    - name: storage_leaf1
      ipv4:
        allocationEnd: 172.18.1.250
        allocationStart: 172.18.1.10
        cidr: 172.18.1.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.1.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.1.1
      vlan: 31
      attachConfiguration: br-osp
    - name: storage_leaf2
      ipv4:
        allocationEnd: 172.18.2.250
        allocationStart: 172.18.2.10
        cidr: 172.18.2.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.2.1
        - destination: 172.18.1.0/24
          nexthop: 172.18.2.1
      vlan: 32
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1350
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
        routes:
        - destination: 172.19.1.0/24
          nexthop: 172.19.0.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.0.1
      vlan: 40
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf1
      ipv4:
        allocationEnd: 172.19.1.250
        allocationStart: 172.19.1.10
        cidr: 172.19.1.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.1.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.1.1
      vlan: 41
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf2
      ipv4:
        allocationEnd: 172.19.2.250
        allocationStart: 172.19.2.10
        cidr: 172.19.2.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.2.1
        - destination: 172.19.1.0/24
          nexthop: 172.19.2.1
      vlan: 42
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1350
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
        routes:
        - destination: 172.20.1.0/24
          nexthop: 172.20.0.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.0.1
      vlan: 50
      attachConfiguration: br-osp
    - name: tenant_leaf1
      ipv4:
        allocationEnd: 172.20.1.250
        allocationStart: 172.20.1.10
        cidr: 172.20.1.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.1.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.1.1
      vlan: 51
      attachConfiguration: br-osp
    - name: tenant_leaf2
      ipv4:
        allocationEnd: 172.20.2.250
        allocationStart: 172.20.2.10
        cidr: 172.20.2.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.2.1
        - destination: 172.20.1.0/24
          nexthop: 172.20.2.1
      vlan: 52
      attachConfiguration: br-osp
Create the networks defined in the openstacknetconfig.yaml file:
$ oc create -f openstacknetconfig.yaml -n openstack
Verify that the resources and child resources for the `OpenStackNetConfig` resource are created:
$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
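Optional: To see more detail about reconciliation progress than the list output provides, you can describe the CR. This is a generic `oc` sketch, not an OSPdO-specific command:
$ oc describe openstacknetconfig/openstacknetconfig -n openstack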
14.2. Add roles for leaf networks to your deployment
To add roles for the leaf networks to your deployment, update the `roles_data.yaml` configuration file. If the leaf network roles have different NIC configurations, you can create Ansible NIC templates for each role to configure the spine-leaf networking, register the NIC templates, and create the `ConfigMap` custom resource.
Note: You must use `roles_data.yaml` as the filename.
Procedure
Update the `roles_data.yaml` file:
...
###############################################################################
# Role: ComputeLeaf1                                                          #
###############################################################################
- name: ComputeLeaf1
  description: |
    Basic ComputeLeaf1 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - compute
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Tenant:
      subnet: tenant_leaf1
    Storage:
      subnet: storage_leaf1
  HostnameFormatDefault: '%stackname%-novacompute-leaf1-%index%'
...
###############################################################################
# Role: ComputeLeaf2                                                          #
###############################################################################
- name: ComputeLeaf2
  description: |
    Basic ComputeLeaf2 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - compute
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf2
    Tenant:
      subnet: tenant_leaf2
    Storage:
      subnet: storage_leaf2
  HostnameFormatDefault: '%stackname%-novacompute-leaf2-%index%'
...
- Create a NIC template for each Compute role. For example Ansible NIC templates, see https://github.com/openstack/tripleo-ansible/tree/stable/wallaby/tripleo_ansible/roles/tripleo_network_config/templates.
Add the NIC templates for the new nodes to an environment file:
parameter_defaults:
  ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf1NetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf2NetworkConfigTemplate: 'multiple_nics_compute_leaf_2_vlans_dvr.j2'
In the `~/custom_environment_files` directory, archive the `roles_data.yaml` file, the environment file, and the NIC templates into a tarball:
$ tar -cvzf custom-spine-leaf-config.tar.gz *.yaml
Create the `tripleo-tarball-config` `ConfigMap` resource:
$ oc create configmap tripleo-tarball-config --from-file=custom-spine-leaf-config.tar.gz -n openstack
14.3. Deploying the overcloud with multiple routed networks
To deploy the overcloud with multiple sets of routed networks, create the control plane and the Compute nodes for the spine-leaf network, and then render and apply the Ansible playbooks. To create the control plane, specify the resources for the Controller nodes. To create the Compute nodes for the leafs from bare-metal machines, include the resource specification in the `OpenStackBaremetalSet` custom resource.
Procedure
Create a file named `openstack-controller.yaml` on your workstation. Include the resource specification for the Controller nodes. The following example shows a specification for a control plane with a single Controller node:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientImageURL: registry.redhat.io/rhosp-rhel9/openstack-tripleoclient:17.1
  openStackClientNetworks:
    - ctlplane
    - external
    - internal_api
    - internal_api_leaf1 # optionally the openstackclient can also be connected to subnets
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  domainName: ostest.test.metalkube.org
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 1
      networks:
        - ctlplane
        - internal_api
        - external
        - tenant
        - storage
        - storage_mgmt
      cores: 6
      memory: 20
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
  enableFencing: False
Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the `OpenStackControlPlane` resource.
resource. Create a file on your workstation for each Compute leaf, for example,
openstack-computeleaf1.yaml
. Include the resource specification for the Compute nodes for the leaf. The following example shows a specification for one Compute leaf that includes one Compute node:apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBaremetalSet metadata: name: computeleaf1 namespace: openstack spec: # How many nodes to provision count: 1 # The image to install on the provisioned nodes baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 # The secret containing the SSH pub key to place on the provisioned nodes deploymentSSHSecret: osp-controlplane-ssh-keys # The interface on the nodes that will be assigned an IP from the mgmtCidr ctlplaneInterface: enp7s0 # Networks to associate with this host networks: - ctlplane - internal_api_leaf1 - external - tenant_leaf1 - storage_leaf1 roleName: ComputeLeaf1 passwordSecret: userpassword
Create the Compute nodes for each leaf:
$ oc create -f openstack-computeleaf1.yaml -n openstack
- Generate the Ansible playbooks by using `OpenStackConfigGenerator` and apply the overcloud configuration. For more information, see Configuring and deploying the overcloud with director Operator.
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the `OpenStackVMSet` resources to verify the creation of the control plane virtual machine (VM) set:
$ oc get openstackvmsets -n openstack
View the VM resources to verify the creation of the control plane VMs in OpenShift Virtualization:
$ oc get virtualmachines -n openstack
Test access to the `openstackclient` pod remote shell:
$ oc rsh -n openstack openstackclient
View the resource for each Compute leaf:
$ oc get openstackbaremetalset/computeleaf1 -n openstack
View the bare-metal machines managed by RHOCP to verify the creation of the Compute nodes:
$ oc get baremetalhosts -n openshift-machine-api
Chapter 15. Backing up and restoring a director Operator deployed overcloud
Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) provides custom resource definitions (CRDs) for backing up and restoring a deployment. You do not have to manually export and import multiple configurations. OSPdO knows which custom resources (CRs), including the `ConfigMap` and `Secret` CRs, it needs to create a complete backup because it is aware of the state of all resources. Therefore, OSPdO does not back up any configuration that is in an incomplete or error state.
To back up and restore an OSPdO deployment, you create an `OpenStackBackupRequest` CR to initiate the creation or restoration of a backup. Your `OpenStackBackupRequest` CR creates the `OpenStackBackup` CR that stores the backup of the CRs and the `ConfigMap` and `Secret` configurations for the specified namespace.
15.1. Backing up director Operator
To create a backup, you must create an `OpenStackBackupRequest` custom resource (CR) for the namespace. The `OpenStackBackup` CR is created when the `OpenStackBackupRequest` object is created in `save` mode.
Procedure
- Create a file named `openstack_backup.yaml` on your workstation.
Add the following configuration to your `openstack_backup.yaml` file to create the `OpenStackBackupRequest` custom resource (CR):
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBackupRequest
metadata:
  name: openstackbackupsave
  namespace: openstack
spec:
  mode: save
  additionalConfigMaps: []
  additionalSecrets: []
Note: OSPdO attempts to include all `ConfigMap` and `Secret` objects associated with the OSPdO CRs in the namespace, such as `OpenStackControlPlane` and `OpenStackBaremetalSet`. You do not need to include those in the additional lists.
Save the
openstack_backup.yaml
file. Create the
OpenStackBackupRequest
CR:$ oc create -f openstack_backup.yaml -n openstack
Monitor the creation status of the `OpenStackBackupRequest` CR:
$ oc get openstackbackuprequest openstackbackupsave -n openstack
The `Quiescing` state indicates that OSPdO is waiting for the CRs to reach their finished state. The number of CRs can affect how long it takes to finish creating the backup.
NAME                  OPERATION   SOURCE   STATUS      COMPLETION TIMESTAMP
openstackbackupsave   save                 Quiescing
If the status remains in the `Quiescing` state for longer than expected, you can investigate the OSPdO logs to check progress:
$ oc logs <operator_pod> -c manager -f
2022-01-11T18:26:15.180Z INFO controllers.OpenStackBackupRequest Quiesce for save for OpenStackBackupRequest openstackbackupsave is waiting for: [OpenStackBaremetalSet: compute, OpenStackControlPlane: overcloud, OpenStackVMSet: controller]
- Replace `<operator_pod>` with the name of the Operator pod.
The `Saved` state indicates that the `OpenStackBackup` CR is created.
NAME                  OPERATION   SOURCE   STATUS   COMPLETION TIMESTAMP
openstackbackupsave   save                 Saved    2022-01-11T19:12:58Z
The `Error` state indicates that the backup failed to create. Review the request contents to find the error:
$ oc get openstackbackuprequest openstackbackupsave -o yaml -n openstack
View the `OpenStackBackup` resource to confirm it exists:
$ oc get openstackbackup -n openstack
NAME                             AGE
openstackbackupsave-1641928378   6m7s
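The `OpenStackBackup` CR is stored inside the cluster itself. If you also want an off-cluster copy, for example before cluster maintenance, you can export the CR with standard `oc` tooling. A minimal sketch that uses the backup name generated in the example above:
$ oc get openstackbackup openstackbackupsave-1641928378 -n openstack -o yaml > openstackbackup-export.yaml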
15.2. Restoring director Operator from a backup
When you request to restore a backup, Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) takes the contents of the specified `OpenStackBackup` resource and attempts to apply them to all existing custom resources (CRs), `ConfigMap` and `Secret` resources present within the namespace. OSPdO overwrites any existing resources in the namespace, and creates new resources for those not found within the namespace.
Procedure
List the available backups:
$ oc get osbackup
Inspect the details of a specific backup:
$ oc get osbackup <name> -o yaml
- Replace `<name>` with the name of the backup that you want to inspect.
-
Create a file named
openstack_restore.yaml
on your workstation. Add the following configuration to your
openstack_restore.yaml
file to create theOpenStackBackupRequest
custom resource (CR):apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBackupRequest metadata: name: openstackbackuprestore namespace: openstack spec: mode: <mode> restoreSource: <restore_source>
- Replace `<mode>` with one of the following options:
  - `restore`: Requests a restore from an existing `OpenStackBackup`.
  - `cleanRestore`: Completely wipes the existing OSPdO resources within the namespace before restoring and creating new resources from the existing `OpenStackBackup`.
- Replace `<restore_source>` with the ID of the `OpenStackBackup` to restore, for example, `openstackbackupsave-1641928378`.
- Save the `openstack_restore.yaml` file.
Create the `OpenStackBackupRequest` CR:
$ oc create -f openstack_restore.yaml -n openstack
Monitor the creation status of the `OpenStackBackupRequest` CR:
$ oc get openstackbackuprequest openstackbackuprestore -n openstack
The `Loading` state indicates that all resources from the `OpenStackBackup` are being applied against the cluster.
NAME                     OPERATION   SOURCE                           STATUS    COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Loading
The `Reconciling` state indicates that all resources are loaded and OSPdO has begun reconciling to attempt to provision all resources.
NAME                     OPERATION   SOURCE                           STATUS        COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Reconciling
The `Restored` state indicates that the `OpenStackBackup` CR has been restored.
NAME                     OPERATION   SOURCE                           STATUS     COMPLETION TIMESTAMP
openstackbackuprestore   restore     openstackbackupsave-1641928378   Restored   2022-01-12T13:48:57Z
The `Error` state indicates that the restoration failed. Review the request contents to find the error:
$ oc get openstackbackuprequest openstackbackuprestore -o yaml -n openstack
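After the request reaches the `Restored` state, you can spot-check that the key OSPdO resources exist again in the namespace. A generic sketch that lists the CR types used throughout this guide:
$ oc get openstackcontrolplane,openstackvmset,openstackbaremetalset -n openstack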
Chapter 16. Change resources on virtual machines using director Operator
To change the CPU, RAM, and disk resources of an `OpenStackVMSet` custom resource (CR), use the `OpenStackControlPlane` CRD.
16.1. Change the CPU or RAM of an OpenStackVMSet CR
You can use the `OpenStackControlPlane` CRD to change the CPU or RAM of an `OpenStackVMSet` custom resource (CR).
Procedure
Change the number of Controller virtualMachineRole cores to 8:
$ oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/cores", "value": 8 }]'
Change the Controller virtualMachineRole RAM size to 22GB:
$ oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/memory", "value": 22 }]'
Validate the virtualMachineRole resource:
$ oc get osvmset
NAME         CORES   RAM   DESIRED   READY   STATUS        REASON
controller   8       22    1         1       Provisioned   All requested VirtualMachines have been provisioned
- Gracefully shut down each updated virtual machine from inside the VM, one at a time.
Power on the virtual machine:
$ virtctl start <VM>
- Replace `<VM>` with the name of your virtual machine.
16.2. Add additional disks to an OpenStackVMSet CR
You can use the `OpenStackControlPlane` CRD to add additional disks to a virtual machine by editing the `additionalDisks` property.
Procedure
Add or update the `additionalDisks` parameter in the `OpenStackControlPlane` object:
spec:
  ...
  virtualMachineRoles:
    Controller:
      ...
      additionalDisks:
        - baseImageVolumeName: openstack-base-img
          dedicatedIOThread: false
          diskSize: 10
          name: "data-disk1"
          storageAccessMode: ReadWriteMany
          storageClass: host-nfs-storageclass
          storageVolumeMode: Filesystem
Apply the patch:
$ oc patch -n openstack osctlplane overcloud --patch-file controller_add_data_disk1.yaml
Validate the virtualMachineRole resource:
$ oc get osvmset controller -o json | jq .spec.additionalDisks
[
  {
    "baseImageVolumeName": "openstack-base-img",
    "dedicatedIOThread": false,
    "diskSize": 10,
    "name": "data-disk1",
    "storageAccessMode": "ReadWriteMany",
    "storageClass": "host-nfs-storageclass",
    "storageVolumeMode": "Filesystem"
  }
]
- Gracefully shut down each updated virtual machine from inside the VM, one at a time.
Power on the virtual machine:
$ virtctl start <VM>
- Replace `<VM>` with the name of your virtual machine.
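To confirm that the new disk is visible to the guest after it powers back on, you can list the block devices from inside the virtual machine. A minimal sketch that follows this guide's convention of reaching nodes through the `openstackclient` pod; the Controller host name is an example:
$ oc rsh -n openstack openstackclient
$ ssh controller-0.ctlplane "lsblk"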
Chapter 17. Airgapped environment
An air-gapped environment is physically isolated from other networks and systems. You can install director Operator in an air-gapped environment to improve security and to satisfy certain regulatory requirements.
17.1. Prerequisites
An operational Red Hat OpenShift Container Platform (RHOCP) cluster, version 4.12, 4.14, or 4.16. The cluster must contain a `provisioning` network and the following Operators:
  - A `baremetal` cluster Operator. The `baremetal` cluster Operator must be enabled. For more information on `baremetal` cluster Operators, see Bare-metal cluster Operators.
  - OpenShift Virtualization Operator. For more information on installing the OpenShift Virtualization Operator, see Installing OpenShift Virtualization using the web console.
  - SR-IOV Network Operator.
- You have a disconnected registry adhering to docker v2 schema. For more information, see Mirroring images for a disconnected installation.
- You have access to a Satellite server or any other repository used to register the overcloud nodes and install packages.
- You have access to a local git repository to store deployment artifacts.
The following command line tools are installed on your workstation:
- `podman`
- `skopeo`
- `oc`
- `jq`
17.2. Configuring an airgapped environment
To configure an airgapped environment, you must have access to both `registry.redhat.io` and the registry for the airgapped environment. For more information on how to access both registries, see Mirroring catalog contents to airgapped registries.
Procedure
Create the `openstack` namespace:
$ oc new-project openstack
Create the index image and push it to your registry:
$ podman login registry.redhat.io
$ podman login your.registry.local
$ BUNDLE_IMG="registry.redhat.io/rhosp-rhel9/osp-director-operator-bundle@sha256:<bundle digest>"
$ INDEX_IMG="quay.io/<account>/osp-director-operator-index:x.y.z-a"
$ opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman
Note: You can get the latest bundle image from Certified container images. Search for `osp-director-operator-bundle`.
Retrieve the digest of the index image that you created in the previous step:
$ INDEX_DIGEST="$(skopeo inspect docker://quay.io/<account>/osp-director-operator-index:x.y.z-a | jq '.Digest' -r)"
Mirror the relevant images based on the operator index image:
$ oc adm catalog mirror quay.io/<account>/osp-director-operator-index@${INDEX_DIGEST} your.registry.local --insecure --index-filter-by-os='Linux/x86_64'
After mirroring is complete, a manifests directory named `manifests-osp-director-operator-index-<random_number>` is generated in your current directory. Apply the created `ImageContentSourcePolicy` to your cluster:
$ oc apply -f manifests-osp-director-operator-index-<random_number>/imageContentSourcePolicy.yaml
- Replace `<random_number>` with the randomly generated number.
Create a file named `osp-director-operator.yaml` and include the following YAML content to configure the three resources required to install director Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: your.registry.local/osp-director-operator-index:1.3.x-y
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "osp-director-operator-group"
  namespace: openstack
spec:
  targetNamespaces:
    - openstack
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  config:
    env:
      - name: WATCH_NAMESPACE
        value: openstack,openshift-machine-api,openshift-sriov-network-operator
  source: osp-director-operator-index
  sourceNamespace: openstack
  name: osp-director-operator
Create the new resources in the `openstack` namespace:
$ oc apply -f osp-director-operator.yaml
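Optional: Before continuing, you can check that the catalog source built from your mirrored index is ready and that its pod is running. This is a generic `oc` sketch, assuming the resource names used in this example:
$ oc get catalogsource osp-director-operator-index -n openstack
$ oc get pods -n openstack | grep osp-director-operator-index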
Copy the required overcloud images to the repository:
$ for i in $(podman search --limit 1000 "registry.redhat.io/rhosp-rhel9/openstack" --format="{{ .Name }}" | awk '{print $1 ":" "17.1.0"}' | awk -F "/" '{print $2 "/" $3}'); do skopeo copy --all docker://registry.redhat.io/$i docker://your.registry.local/$i;done
Note: If Red Hat Satellite is used as the local registry, see Preparing a Satellite server for container images.
- You can now proceed with Installing and preparing director Operator.
Verification
Confirm that you have successfully installed director Operator:
$ oc get operators
NAME                              AGE
osp-director-operator.openstack   5m
Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1)
You can upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 overcloud to a RHOSP 17.1 overcloud with director Operator (OSPdO) by using the in-place framework for upgrades (FFU) workflow.
To perform an upgrade, you must perform the following tasks:
- Prepare your environment for the upgrade.
- Update custom `roles_data` files to the composable services supported by RHOSP 17.1.
- Optional: Upgrade Red Hat Ceph Storage and adopt `cephadm`.
- Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8.
- Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9.
- Perform post-upgrade tasks.
18.1. Prerequisites
- You are using the latest version of OSPdO.
- The overcloud nodes are up-to-date with the latest RHOSP 16.2 version. For information about how to perform a minor update, see Performing a minor update of the RHOSP overcloud with director Operator.
- The overcloud nodes are running the latest RHEL 8 kernel.
18.2. Updating director Operator
You must update your director Operator (OSPdO) to the latest 17.1 version before performing the overcloud upgrade. To update OSPdO, you must first delete and reinstall the current OSPdO. To delete OSPdO, you delete the OSPdO subscription and CSV.
Procedure
Check the current version of the director Operator in the `currentCSV` field:
$ oc get subscription osp-director-operator.openstack -n openstack -o yaml | grep currentCSV
Delete the CSV for the director Operator in the target namespace:
$ oc delete clusterserviceversion <current_CSV> -n openstack
- Replace `<current_CSV>` with the `currentCSV` value from step 1.
Delete the subscription:
$ oc delete subscription osp-director-operator.openstack -n openstack
- Install the latest 17.1 director Operator. For information, see Installing director Operator.
18.3. Preparing your director Operator environment for upgrade
You must prepare your director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment for the upgrade to RHOSP 17.1.
Procedure
Set `openStackRelease` to 17.1 on the `openstackcontrolplane` CR:
$ oc patch openstackcontrolplane -n openstack overcloud --type=json -p="[{'op': 'replace', 'path': '/spec/openStackRelease', 'value': '17.1'}]"
Retrieve the OSPdO `ClusterServiceVersion` (`csv`) CR:
$ oc get csv -n openstack
Delete all instances of the `OpenStackConfigGenerator` CR:
$ oc delete -n openstack openstackconfiggenerator --all
If your deployment includes HCI, the adoption from `ceph-ansible` to `cephadm` must be performed by using the RHOSP 17.1 on RHEL 8 `openstackclient` image:
$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'replace', 'path': '/spec/imageURL', 'value': 'registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1'}]"
If your deployment does not include HCI, or the `cephadm` adoption has already been completed, switch to the 17.1 OSPdO default `openstackclient` image by removing the current `imageURL` from the `openstackclient` CR:
$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
If you have enabled fencing in the overcloud, you must temporarily disable fencing on one of the Controller nodes for the duration of the upgrade:
$ oc rsh -n openstack openstackclient
$ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=false"
18.4. Updating composable services in custom roles_data files
You must update your `roles_data` files to the supported Red Hat OpenStack Platform (RHOSP) 17.1 composable services. For more information, see Updating composable services in custom roles_data files in the Framework for Upgrades (16.2 to 17.1) guide.
Procedure
Remove the following services from all roles:
`OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI`
`OS::TripleO::Services::CinderBackendDellPs`
`OS::TripleO::Services::CinderBackendVRTSHyperScale`
`OS::TripleO::Services::Ec2Api`
`OS::TripleO::Services::Fluentd`
`OS::TripleO::Services::FluentdAlt`
`OS::TripleO::Services::Keepalived`
`OS::TripleO::Services::MistralApi`
`OS::TripleO::Services::MistralEngine`
`OS::TripleO::Services::MistralEventEngine`
`OS::TripleO::Services::MistralExecutor`
`OS::TripleO::Services::NeutronLbaasv2Agent`
`OS::TripleO::Services::NeutronLbaasv2Api`
`OS::TripleO::Services::NeutronML2FujitsuCfab`
`OS::TripleO::Services::NeutronML2FujitsuFossw`
`OS::TripleO::Services::NeutronSriovHostConfig`
`OS::TripleO::Services::NovaConsoleauth`
`OS::TripleO::Services::Ntp`
`OS::TripleO::Services::OpenDaylightApi`
`OS::TripleO::Services::OpenDaylightOvs`
`OS::TripleO::Services::OpenShift::GlusterFS`
`OS::TripleO::Services::OpenShift::Infra`
`OS::TripleO::Services::OpenShift::Master`
`OS::TripleO::Services::OpenShift::Worker`
`OS::TripleO::Services::PankoApi`
`OS::TripleO::Services::Rear`
`OS::TripleO::Services::SaharaApi`
`OS::TripleO::Services::SaharaEngine`
`OS::TripleO::Services::SensuClient`
`OS::TripleO::Services::SensuClientAlt`
`OS::TripleO::Services::SkydiveAgent`
`OS::TripleO::Services::SkydiveAnalyzer`
`OS::TripleO::Services::Tacker`
`OS::TripleO::Services::TripleoUI`
`OS::TripleO::Services::UndercloudMinionMessaging`
`OS::TripleO::Services::UndercloudUpgradeEphemeralHeat`
`OS::TripleO::Services::Zaqar`
- Add the `OS::TripleO::Services::GlanceApiInternal` service to your Controller role.
- Update the `OS::TripleO::Services::NovaLibvirt` service on the Compute roles to `OS::TripleO::Services::NovaLibvirtLegacy`.
- If your environment includes Red Hat Ceph Storage, set the `DeployedCeph` parameter to `false` to enable director-managed `cephadm` deployments.
deployments. If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the overcloud. The following functions are not supported with automatic conversion:
'get_file'
'get_resource'
'digest'
'repeat'
'resource_facade'
'str_replace'
'str_replace_strict'
'str_split'
'map_merge'
'map_replace'
'yaql'
'equals'
'if'
'not'
'and'
'or'
'filter'
'make_url'
'contains'
For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Installing and managing Red Hat OpenStack Platform with director.
18.5. Upgrading Red Hat Ceph Storage and adopting cephadm
If your environment includes Red Hat Ceph Storage deployments, you must upgrade your deployment to Red Hat Ceph Storage 5. With the upgrade to version 5, `cephadm` manages Red Hat Ceph Storage instead of `ceph-ansible`.
Procedure
- Create an Ansible playbook file named `ceph-admin-user-playbook.yaml` to create a `ceph-admin` user on the overcloud nodes.
Add the following configuration to the `ceph-admin-user-playbook.yaml` file:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: set ssh key path facts
      set_fact:
        private_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa"
        public_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa.pub"
      run_once: true
    - name: stat private key
      stat:
        path: "{{ private_key }}"
      register: private_key_stat
    - name: create private key if it does not exist
      shell: "ssh-keygen -t rsa -q -N '' -f {{ private_key }}"
      no_log: true
      when:
        - not private_key_stat.stat.exists
    - name: stat public key
      stat:
        path: "{{ public_key }}"
      register: public_key_stat
    - name: create public key if it does not exist
      shell: "ssh-keygen -y -f {{ private_key }} > {{ public_key }}"
      when:
        - not public_key_stat.stat.exists

- hosts: overcloud
  gather_facts: false
  become: true
  pre_tasks:
    - name: Get local private key
      slurp:
        src: "{{ hostvars['localhost']['private_key'] }}"
      register: private_key_get
      delegate_to: localhost
      no_log: true
    - name: Get local public key
      slurp:
        src: "{{ hostvars['localhost']['public_key'] }}"
      register: public_key_get
      delegate_to: localhost
  roles:
    - role: tripleo_create_admin
      tripleo_admin_user: "{{ tripleo_admin_user }}"
      tripleo_admin_pubkey: "{{ public_key_get['content'] | b64decode }}"
      tripleo_admin_prikey: "{{ private_key_get['content'] | b64decode }}"
      no_log: true
Copy the playbook to the `openstackclient` container:
$ oc cp -n openstack ceph-admin-user-playbook.yaml openstackclient:/home/cloud-admin/ceph-admin-user-playbook.yaml
Run the playbook on the `openstackclient` container:
$ oc rsh -n openstack openstackclient
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory -e tripleo_admin_user=ceph-admin -e distribute_private_key=true /home/cloud-admin/ceph-admin-user-playbook.yaml
Update the Red Hat Ceph Storage container image parameters in the `containers-prepare-parameter.yaml` file for the version of Red Hat Ceph Storage that your deployment uses:
ceph_namespace: registry.redhat.io/rhceph
ceph_image: <ceph_image_file>
ceph_tag: latest
ceph_grafana_image: <grafana_image_file>
ceph_grafana_namespace: registry.redhat.io/rhceph
ceph_grafana_tag: latest
- Replace `<ceph_image_file>` with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
  - Red Hat Ceph Storage 5: `rhceph-5-rhel8`
- Replace `<grafana_image_file>` with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
  - Red Hat Ceph Storage 5: `rhceph-5-dashboard-rhel8`
- If your deployment includes HCI, update the `CephAnsibleRepo` parameter in `compute-hci.yaml` to "rhelosp-ceph-5-tools".
to "rhelosp-ceph-5-tools". Create an environment file named
upgrade.yaml
and add the following configuration to it:parameter_defaults: UpgradeInitCommand: | sudo subscription-manager repos --disable * if $( grep -q 9.2 /etc/os-release ) then sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms sudo subscription-manager release --set=9.2 else sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms sudo subscription-manager release --set=8.4 fi sudo dnf -y install cephadm
- Create a new `OpenStackConfigGenerator` CR named `ceph-upgrade` that includes the updated environment file and tripleo-tarball ConfigMaps.
Create a file named `openstack-ceph-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the upgrade from Red Hat Ceph Storage 4 to 5:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-upgrade
spec:
  configVersion: <config_version>
  configGenerator: ceph-upgrade
  mode: externalUpgrade
  advancedSettings:
    skipTags:
      - ceph_health
      - opendev-validation
      - ceph_ansible_remote_tmp
    tags:
      - ceph
      - facts
- Save the `openstack-ceph-upgrade.yaml` file.
Create the `OpenStackDeploy` resource:
$ oc create -f openstack-ceph-upgrade.yaml -n openstack
- Wait for the deployment to finish.
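While you wait, you can follow the progress of the deployment by watching the `OpenStackDeploy` CR and the Operator logs. A generic sketch; replace `<operator_pod>` with the name of the OSPdO pod, as in the backup procedures earlier in this guide:
$ oc get openstackdeploy ceph-upgrade -n openstack -w
$ oc logs <operator_pod> -c manager -f -n openstack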
Create a file named `openstack-ceph-upgrade-packages.yaml` on your workstation to define an `OpenStackDeploy` CR that upgrades the Red Hat Ceph Storage packages:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-upgrade-packages
spec:
  configVersion: <config_version>
  configGenerator: ceph-upgrade
  mode: upgrade
  advancedSettings:
    limit: ceph_osd,ceph_mon,Undercloud
    playbook:
      - upgrade_steps_playbook.yaml
    skipTags:
      - ceph_health
      - opendev-validation
      - ceph_ansible_remote_tmp
    tags:
      - setup_packages
- Save the `openstack-ceph-upgrade-packages.yaml` file.
Create the `OpenStackDeploy` resource:
$ oc create -f openstack-ceph-upgrade-packages.yaml -n openstack
- Wait for the deployment to finish.
Create a file named `openstack-ceph-upgrade-to-cephadm.yaml` on your workstation to define an `OpenStackDeploy` CR that runs the `cephadm` adoption:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-upgrade-to-cephadm
spec:
  configVersion: <config_version>
  configGenerator: ceph-upgrade
  mode: externalUpgrade
  advancedSettings:
    skipTags:
      - ceph_health
      - opendev-validation
      - ceph_ansible_remote_tmp
    tags:
      - cephadm_adopt
- Save the `openstack-ceph-upgrade-to-cephadm.yaml` file.
Create the `OpenStackDeploy` resource:
$ oc create -f openstack-ceph-upgrade-to-cephadm.yaml -n openstack
- Wait for the deployment to finish.
Update the `openstackclient` image to the RHEL 9 container image by removing the current `imageURL` from the `openstackclient` CR:
$ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
18.6. Upgrading the overcloud to RHOSP 17.1 on RHEL 8
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8, you must update the container preparation file, which is the file that contains the `ContainerImagePrepare` parameter. You use this file to define the rules for obtaining container images for the overcloud.
You must update your container preparation file for both RHEL 8 and RHEL 9 hosts:
- RHEL 9 hosts: All containers are based on RHEL 9.
- RHEL 8 hosts: All containers are based on RHEL 9 except for the `libvirt` and `collectd` containers. The `libvirt` and `collectd` containers must use the same base as the host.
You must then generate a new `OpenStackConfigGenerator` CR before deploying the updates.
Procedure
- Open the container preparation file, `containers-prepare-parameter.yaml`, and check that it obtains the correct image versions.
Add the `ContainerImagePrepareRhel8` parameter to `containers-prepare-parameter.yaml`:
parameter_defaults:
  # default container image configuration for RHEL 9 hosts
  ContainerImagePrepare:
  - push_destination: false
    set: &container_image_prepare_rhel9_contents
      tag: 17.1.2
      name_prefix: openstack-
      namespace: registry.redhat.io/rhosp-rhel9
      ceph_namespace: registry.redhat.io/rhceph
      ceph_image: rhceph-5-rhel8
      ceph_tag: latest
      ceph_alertmanager_image: ose-prometheus-alertmanager
      ceph_alertmanager_namespace: registry.redhat.io/openshift4
      ceph_alertmanager_tag: v4.10
      ceph_grafana_image: rhceph-5-dashboard-rhel8
      ceph_grafana_namespace: registry.redhat.io/rhceph
      ceph_grafana_tag: latest
      ceph_node_exporter_image: ose-prometheus-node-exporter
      ceph_node_exporter_namespace: registry.redhat.io/openshift4
      ceph_node_exporter_tag: v4.10
      ceph_prometheus_image: ose-prometheus
      ceph_prometheus_namespace: registry.redhat.io/openshift4
      ceph_prometheus_tag: v4.10

  # RHEL8 hosts pin the collectd and libvirt containers to rhosp-rhel8.
  # To apply the following configuration, reference the following parameter
  # in the role specific parameters below: <Role>ContainerImagePrepare
  ContainerImagePrepareRhel8: &container_image_prepare_rhel8
  - push_destination: false
    set: *container_image_prepare_rhel9_contents
    excludes:
    - collectd
    - nova-libvirt
  - push_destination: false
    set:
      tag: 17.1.2
      name_prefix: openstack-
      namespace: registry.redhat.io/rhosp-rhel8
      ceph_namespace: registry.redhat.io/rhceph
      ceph_image: rhceph-5-rhel8
      ceph_tag: latest
      ceph_alertmanager_image: ose-prometheus-alertmanager
      ceph_alertmanager_namespace: registry.redhat.io/openshift4
      ceph_alertmanager_tag: v4.10
      ceph_grafana_image: rhceph-5-dashboard-rhel8
      ceph_grafana_namespace: registry.redhat.io/rhceph
      ceph_grafana_tag: latest
      ceph_node_exporter_image: ose-prometheus-node-exporter
      ceph_node_exporter_namespace: registry.redhat.io/openshift4
      ceph_node_exporter_tag: v4.10
      ceph_prometheus_image: ose-prometheus
      ceph_prometheus_namespace: registry.redhat.io/openshift4
      ceph_prometheus_tag: v4.10
    includes:
    - collectd
    - nova-libvirt

  # Initially all hosts are RHEL 8 so set the role specific container
  # image prepare parameter to the RHEL 8 configuration
  ControllerContainerImagePrepare: *container_image_prepare_rhel8
  ComputeContainerImagePrepare: *container_image_prepare_rhel8
  ...
- Create an environment file named `upgrade.yaml`.
Add the following configuration to the `upgrade.yaml` file:
parameter_defaults:
  UpgradeInitCommand: |
    sudo subscription-manager repos --disable *
    if $( grep -q 9.2 /etc/os-release )
    then
      sudo subscription-manager repos --enable=rhel-9.2-for-x86_64-baseos-eus-rpms --enable=rhel-9.2-for-x86_64-appstream-eus-rpms --enable=rhel-9.2-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
    else
      sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
    fi
- Create an environment file named `disable_compute_service_check.yaml`.
Add the following configuration to the `disable_compute_service_check.yaml` file:
parameter_defaults:
  ExtraConfig:
    nova::workarounds::disable_compute_service_check_for_ffu: true
parameter_merge_strategies:
  ExtraConfig: merge
- If your deployment includes HCI, update the Red Hat Ceph Storage and HCI parameters from `ceph-ansible` values in RHOSP 16.2 to `cephadm` values in RHOSP 17.1. For more information, see Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator.
Create a file named `openstack-configgen-upgrade.yaml` on your workstation that defines a new `OpenStackConfigGenerator` CR named "upgrade":
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: "upgrade"
  namespace: openstack
spec:
  enableFencing: False
  gitSecret: git-secret
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
    - nova-hw-machine-type-upgrade.yaml
    - lifecycle/upgrade-prepare.yaml
  heatEnvConfigMap: heat-env-config-upgrade
  tarballConfigMap: tripleo-tarball-config-upgrade
Create a file named `openstack-upgrade.yaml` on your workstation to create an `OpenStackDeploy` CR for the overcloud upgrade:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: upgrade
spec:
  configVersion: <config_version>
  configGenerator: upgrade
  mode: upgrade
- Save the `openstack-upgrade.yaml` file.
Create the `OpenStackDeploy` resource:
$ oc create -f openstack-upgrade.yaml -n openstack
- Wait for the deployment to finish. The overcloud nodes are now running 17.1 containers on RHEL 8.
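To spot-check the result on a node, you can list the unique container images in use on that node. A minimal sketch from the `openstackclient` pod; the Controller host name is an example:
$ oc rsh -n openstack openstackclient
$ ssh controller-0.ctlplane "sudo podman ps --format '{{.Image}}' | sort -u"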
18.7. Upgrading the overcloud to RHEL 9
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9, you must update the container preparation file, which is the file that contains the `ContainerImagePrepare` parameter. You use this file to define the rules for obtaining container images for the overcloud. You must then generate a new `OpenStackConfigGenerator` CR before deploying the updates.
Procedure
- Open the container preparation file, `containers-prepare-parameter.yaml`, and check that it obtains the correct image versions.
Remove the following role-specific overrides from the `containers-prepare-parameter.yaml` file:
ControllerContainerImagePrepare: *container_image_prepare_rhel8
ComputeContainerImagePrepare: *container_image_prepare_rhel8
- Open the `roles_data.yaml` file and replace `OS::TripleO::Services::NovaLibvirtLegacy` with `OS::TripleO::Services::NovaLibvirt`.
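For example, you can script the replacement with `sed`; this sketch assumes GNU sed and that `roles_data.yaml` is in the current directory:

  $ sed -i 's/OS::TripleO::Services::NovaLibvirtLegacy/OS::TripleO::Services::NovaLibvirt/g' roles_data.yaml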
- Create an environment file named `skip_rhel_release.yaml`, and add the following configuration:

  parameter_defaults:
    SkipRhelEnforcement: false
- Create an environment file named `system_upgrade.yaml` and add the following configuration:

  parameter_defaults:
    NICsPrefixesToUdev: ['en']
    UpgradeLeappDevelSkip: "LEAPP_UNSUPPORTED=1 LEAPP_DEVEL_SKIP_CHECK_OS_RELEASE=1 LEAPP_NO_NETWORK_RENAMING=1 LEAPP_DEVEL_TARGET_RELEASE=9.2"
    UpgradeLeappDebug: false
    UpgradeLeappEnabled: true
    LeappActorsToRemove: ['checkifcfg','persistentnetnamesdisable','checkinstalledkernels','biosdevname']
    LeappRepoInitCommand: |
      sudo subscription-manager repos --disable=*
      subscription-manager repos --enable rhel-8-for-x86_64-baseos-tus-rpms --enable rhel-8-for-x86_64-appstream-tus-rpms --enable openstack-17.1-for-rhel-8-x86_64-rpms
      subscription-manager release --set=8.4
    UpgradeLeappCommandOptions: "--enablerepo=rhel-9-for-x86_64-baseos-eus-rpms --enablerepo=rhel-9-for-x86_64-appstream-eus-rpms --enablerepo=rhel-9-for-x86_64-highavailability-eus-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms"
    LeappInitCommand: |
      sudo subscription-manager repos --disable=*
      sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
      leapp answer --add --section check_vdo.confirm=True
      dnf -y remove irb
For more information on the recommended Leapp parameters, see Upgrade parameters in the Framework for upgrades (16.2 to 17.1) guide.
- Create a new `OpenStackConfigGenerator` CR named `system-upgrade` that includes the updated heat environment and tripleo tarball ConfigMaps, as shown in the sketch after this step.
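The following is a minimal sketch of the `system-upgrade` CR, modeled on the earlier `upgrade` CR. The `heat-env-config-system-upgrade` and `tripleo-tarball-config-system-upgrade` ConfigMap names are assumptions; substitute the ConfigMaps that contain your updated environment files, including `system_upgrade.yaml` and `skip_rhel_release.yaml`, and your updated tarball:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackConfigGenerator
  metadata:
    name: "system-upgrade"
    namespace: openstack
  spec:
    enableFencing: False
    gitSecret: git-secret
    heatEnvConfigMap: heat-env-config-system-upgrade
    tarballConfigMap: tripleo-tarball-config-system-upgrade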
- Create a file named `openstack-controller0-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the first controller node:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: system-upgrade-controller-0
  spec:
    configVersion: <config_version>
    configGenerator: system-upgrade
    mode: upgrade
    advancedSettings:
      limit: Controller[0]
      tags:
      - system_upgrade
- Save the `openstack-controller0-upgrade.yaml` file. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 0:

  $ oc create -f openstack-controller0-upgrade.yaml -n openstack
- Wait for the deployment to finish.
- Create a file named `openstack-controller1-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the second controller node:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: system-upgrade-controller-1
  spec:
    configVersion: <config_version>
    configGenerator: system-upgrade
    mode: upgrade
    advancedSettings:
      limit: Controller[1]
      tags:
      - system_upgrade
- Save the `openstack-controller1-upgrade.yaml` file. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 1:

  $ oc create -f openstack-controller1-upgrade.yaml -n openstack
- Wait for the deployment to finish.
- Create a file named `openstack-controller2-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the third controller node:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: system-upgrade-controller-2
  spec:
    configVersion: <config_version>
    configGenerator: system-upgrade
    mode: upgrade
    advancedSettings:
      limit: Controller[2]
      tags:
      - system_upgrade
- Save the `openstack-controller2-upgrade.yaml` file. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 2:

  $ oc create -f openstack-controller2-upgrade.yaml -n openstack
- Wait for the deployment to finish.
- Create a file named `openstack-computes-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR that upgrades all Compute nodes:

  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackDeploy
  metadata:
    name: system-upgrade-computes
  spec:
    configVersion: <config_version>
    configGenerator: system-upgrade
    mode: upgrade
    advancedSettings:
      limit: Compute
      tags:
      - system_upgrade
- Save the `openstack-computes-upgrade.yaml` file. Create the `OpenStackDeploy` resource to run the system upgrade on the Compute nodes:

  $ oc create -f openstack-computes-upgrade.yaml -n openstack
- Wait for the deployment to finish.
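Optionally, you can verify from the `openstackclient` pod that the nodes now run RHEL 9; a quick spot check on the first Controller node, using the same access pattern as the fencing step in the next section:

  $ oc rsh -n openstack openstackclient
  $ ssh controller-0.ctlplane "cat /etc/redhat-release"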
18.8. Performing post-upgrade tasks
After the overcloud upgrade completes successfully, you must perform some post-upgrade tasks to finalize your environment.
Procedure
- Update the `baseImageUrl` parameter to a RHEL 9.2 guest image in your `OpenStackProvisionServer` CR and `OpenStackBaremetalSet` CR, as shown in the sketch after this step.
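One way to make this change is with `oc patch`. This is a sketch only: the resource names `openstack-provision-server` and `compute` are hypothetical, and the image URL is a placeholder that you must replace with the location of your RHEL 9.2 guest image:

  $ oc -n openstack patch openstackprovisionserver openstack-provision-server --type merge \
      -p '{"spec": {"baseImageUrl": "http://<image_server>/rhel-9.2-x86_64-kvm.qcow2"}}'
  $ oc -n openstack patch openstackbaremetalset compute --type merge \
      -p '{"spec": {"baseImageUrl": "http://<image_server>/rhel-9.2-x86_64-kvm.qcow2"}}'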
- Re-enable fencing on the controllers:

  $ oc rsh -n openstack openstackclient
  $ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=true"
- Perform any other post-upgrade actions relevant to your environment. For more information, see Performing post-upgrade actions in the Framework for upgrades (16.2 to 17.1) guide.