Chapter 5. Creating overcloud nodes with director Operator
A Red Hat OpenStack Platform (RHOSP) overcloud consists of multiple nodes, such as Controller nodes that provide control plane services and Compute nodes that provide computing resources. For a functional overcloud with high availability, you must have 3 Controller nodes and at least one Compute node. You can create Controller nodes with the OpenStackControlPlane Custom Resource Definition (CRD) and Compute nodes with the OpenStackBaremetalSet CRD.
Red Hat OpenShift Container Platform (RHOCP) does not automatically detect issues on RHOCP worker nodes, or automatically recover worker nodes that host RHOSP Controller VMs, when a worker node fails or has an issue. You must enable health checks on your RHOCP cluster to automatically relocate Controller VM pods when a host worker node fails. For more information, see Deploying machine health checks.
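For example, a MachineHealthCheck resource similar to the following sketch remediates failed worker Machines. This is a minimal illustration only: the resource name, label selector, timeouts, and maxUnhealthy value are assumptions that you must adapt to your cluster, and the authoritative procedure is in Deploying machine health checks.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: controller-hosting-workers   # example name, not defined by OSPdO
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      # Example selector: adjust to match the Machines that host the Controller VMs
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: "Unknown"
      timeout: 300s
  maxUnhealthy: 40%
  nodeStartupTimeout: 10m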
5.1. Creating a control plane with the OpenStackControlPlane CRD
The Red Hat OpenStack Platform (RHOSP) control plane contains the RHOSP services that manage the overcloud. The default control plane consists of 3 Controller nodes. You can use composable roles to manage services on dedicated controller virtual machines (VMs). For more information on composable roles, see Composable services and custom roles.
Define an OpenStackControlPlane custom resource (CR) to create the Controller nodes as OpenShift Virtualization virtual machines (VMs).
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
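Before you create the control plane, you can optionally confirm that the OpenStackNetConfig resource from the prerequisite exists. This is a plain oc check, assuming the openstack namespace used throughout this chapter:
$ oc get openstacknetconfig -n openstack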
Procedure
- Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. The following example defines a specification for a control plane that consists of 3 Controller nodes:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud 1
  namespace: openstack 2
spec: 3
  openStackClientNetworks:
    - ctlplane
    - internal_api
    - external
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword 4
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 3
      networks:
        - ctlplane
        - internal_api
        - external
        - tenant
        - storage
        - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img 5
        storageClass: host-nfs-storageclass 6
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
      # Optional: additional disks to attach to the VMs. You must configure
      # the disks manually inside the VMs before you use them.
      additionalDisks:
        - name: datadisk
          diskSize: 500
          storageClass: host-nfs-storageclass
          storageAccessMode: ReadWriteMany
          storageVolumeMode: Filesystem
  openStackRelease: "17.1"
1. The name of the overcloud control plane, for example, overcloud.
2. The OSPdO namespace, for example, openstack.
3. The configuration for the control plane.
4. Optional: The Secret resource that provides users with root access to each node by using the password. A sketch of how to create this Secret follows this callout list.
5. The name of the data volume that stores the base operating system image for your Controller VMs. For more information on creating the data volume, see Creating a data volume for the base operating system.
6. For information on configuring Red Hat OpenShift Container Platform (RHOCP) storage, see Dynamic provisioning.
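The passwordSecret value in callout 4 references a Secret named userpassword in the openstack namespace. The following commands are a minimal sketch of how you might create such a Secret; the NodeRootPassword key name is an assumption based on OSPdO documentation conventions, so verify the key that your release expects before you use it:
$ echo -n "p@ssw0rd!" > ./user_password_file   # example password only
$ oc create secret generic userpassword --from-file=NodeRootPassword=./user_password_file -n openstack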
- Save the openstack-controller.yaml file.
- Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. OSPdO also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.
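To follow progress while you wait, you can watch the control plane CR and the pods in the OSPdO namespace. These are plain oc commands, assuming the overcloud CR name from the earlier example:
$ oc get openstackcontrolplane/overcloud -n openstack -w
$ oc get pods -n openstack -w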
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the OpenStackVMSet resources to verify the creation of the control plane VM set:
$ oc get openstackvmsets -n openstack
View the VMs to verify the creation of the control plane OpenShift Virtualization VMs:
$ oc get virtualmachines -n openstack
Test access to the openstackclient remote shell:
$ oc rsh -n openstack openstackclient
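You can also run a single command through the remote shell without opening an interactive session. The following example assumes that the openstack command-line client is available on the PATH inside the openstackclient pod:
$ oc rsh -n openstack openstackclient bash -c "openstack --version"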
5.2. Creating Compute nodes with the OpenStackBaremetalSet CRD
Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud, and you can scale the number of Compute nodes after deployment.
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages.
Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema:
$ oc describe crd openstackbaremetalset
$ oc explain openstackbaremetalset.spec
Prerequisites
- You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks.
- You have created a control plane with the OpenStackControlPlane CRD.
- You have created a BareMetalHost CR for each bare-metal node that you want to add as a Compute node to the overcloud. For information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide. A minimal sketch follows this list.
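The following is a minimal sketch of a BareMetalHost CR and its BMC credentials Secret. The names, MAC address, and BMC address are placeholders, and your hardware might require additional fields such as rootDeviceHints; follow About the BareMetalHost resource for the authoritative procedure:
apiVersion: v1
kind: Secret
metadata:
  name: compute-0-bmc-secret          # example name
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: <bmc_username>
  password: <bmc_password>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: compute-0                     # example name
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <nic_mac_address>
  bmc:
    address: redfish://<bmc_ip>/redfish/v1/Systems/1   # example BMC address format
    credentialsName: compute-0-bmc-secret
    disableCertificateVerification: true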
Procedure
- Create a file named openstack-compute.yaml on your workstation. Include the resource specification for the Compute nodes. The following example defines a specification for 1 Compute node:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute 1
  namespace: openstack 2
spec: 3
  count: 1
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # If you manually created an OpenStackProvisionServer, you can reference it here.
  # Otherwise, director Operator creates one for you that serves the image from
  # baseImageUrl to this OpenStackBaremetalSet.
  # provisionServerName: openstack-provision-server
  ctlplaneInterface: enp2s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
  roleName: Compute
  passwordSecret: userpassword 4
1. The name of the Compute node bare-metal set, for example, compute.
2. The OSPdO namespace, for example, openstack.
3. The configuration for the Compute nodes.
4. Optional: The Secret resource that provides users with root access to each node by using the password.
- Save the openstack-compute.yaml file.
- Create the Compute nodes:
$ oc create -f openstack-compute.yaml -n openstack
Verification
View the resource for the Compute nodes:
$ oc get openstackbaremetalset/compute -n openstack
View the bare-metal machines that RHOCP manages to verify the creation of the Compute nodes:
$ oc get baremetalhosts -n openshift-machine-api
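You can also watch the hosts move through the Metal3 provisioning states until they report provisioned, for example:
$ oc get baremetalhosts -n openshift-machine-api -w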
5.3. Creating a provisioning server with the OpenStackProvisionServer CRD
Provisioning servers provide a specific Red Hat Enterprise Linux (RHEL) QCOW2 image for provisioning Compute nodes for the Red Hat OpenStack Platform (RHOSP). An OpenStackProvisionServer CR is automatically created for any OpenStackBaremetalSet CRs that you create. You can also create the OpenStackProvisionServer CR manually and provide its name to any OpenStackBaremetalSet CRs that you create.
The OpenStackProvisionServer CRD creates an Apache server on the Red Hat OpenShift Container Platform (RHOCP) provisioning network for a specific RHEL QCOW2 image.
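If you create the OpenStackProvisionServer CR manually, reference it by name in the OpenStackBaremetalSet CR instead of relying on the automatically created server. The following fragment is only a sketch; the other OpenStackBaremetalSet fields are omitted for brevity:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  # ...other Compute node settings...
  provisionServerName: openstack-provision-server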
Procedure
- Create a file named openstack-provision.yaml on your workstation. Include the resource specification for the Provisioning server. The following example defines a specification for a Provisioning server that uses a specific RHEL 9.2 QCOW2 image:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstack-provision-server 1
  namespace: openstack 2
spec:
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 3
  port: 8080 4
1. The name that identifies the OpenStackProvisionServer CR.
2. The OSPdO namespace, for example, openstack.
3. The initial source of the RHEL QCOW2 image for the Provisioning server. The image is downloaded from this remote source when the server is created.
4. The Provisioning server port, set to 8080 by default. You can change it for a specific port configuration.
For further descriptions of the values that you can use to configure your OpenStackProvisionServer CR, view the OpenStackProvisionServer CRD specification schema:
$ oc describe crd openstackprovisionserver
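The image referenced by baseImageUrl must be reachable over HTTP from the cluster when the server is created. As a quick test setup, you can serve the image directory from <source_host> with the Python standard library; this is only an example, not a supported production web server:
$ cd <directory_that_contains_the_qcow2_image>
$ sudo python3 -m http.server 80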
- Save the openstack-provision.yaml file.
- Create the Provisioning server:
$ oc create -f openstack-provision.yaml -n openstack
Verify that the resource for the Provisioning server is created:
$ oc get openstackprovisionserver/openstack-provision-server -n openstack
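If the Provisioning server does not become available, you can inspect its status and events with a standard describe command:
$ oc describe openstackprovisionserver/openstack-provision-server -n openstack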