Chapter 3. Customizing the control plane
To customize the Red Hat OpenStack Services on OpenShift (RHOSO) control plane for your environment, you can add, remove, or configure the RHOSO services that run on the control plane.
3.1. Prerequisites
- The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged in to a workstation that has access to the RHOCP cluster, as a user with `cluster-admin` privileges.
3.2. Enabling disabled services
If you enable a service that is disabled, by setting `enabled: true`, you must also ensure that the default values for the service are set. Either create an empty template for the service by adding `template: {}` to the service definition, or specify some or all of the template parameter values. For example, to enable the Dashboard service (horizon) with the default service values, add the following configuration to your `OpenStackControlPlane` custom resource (CR):
```yaml
spec:
  ...
  horizon:
    apiOverride: {}
    enabled: true
    template: {}
```
If you want to set values for specific service parameters, add the following configuration to your `OpenStackControlPlane` CR:
```yaml
spec:
  ...
  horizon:
    apiOverride: {}
    enabled: true
    template:
      customServiceConfig: ""
      memcachedInstance: memcached
      override: {}
      preserveJobs: false
      replicas: 2
      resources: {}
      secret: osp-secret
      tls: {}
```
Any parameters that you do not specify are set to the default value from the service template.
3.3. Configuring authentication for the memcached service
You can configure the cache maintained by the memcached service to require authentication. Authentication increases the security of your cloud by restricting access to the cached data. By default, no authentication is required.
Prerequisites
- You have the `oc` command line tool installed on your workstation.
- You are logged in to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
You can set the authentication mode for the memcached service on your control plane. The following authentication modes are supported:
- `None`: No authentication is required. The client does not require a certificate from the server to proceed with the connection. This mode has the least impact on performance but does not provide any security. This is the default authentication mode.
- `Request`: Authentication is attempted. The client requests a certificate from the server but does not strictly require it to proceed with the connection. This mode provides flexible connections but weaker security.
- `Require`: Authentication is enforced. The client requires the server to present a valid and verifiable certificate. If the server fails to provide one, or if the client cannot validate the certificate against its trusted certificate authorities (CAs), the connection fails. This mode provides strong security at the cost of brittle connections.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Update the memcached service with the required authentication mode. The following example configures the default memcached cluster (`memcached`):

  ```yaml
  spec:
    memcached:
      enabled: true
      templates:
        memcached:
          ...
          tls:
            mtls:
              sslVerifyMode: <authentication_mode>
  ```

  Replace `<authentication_mode>` with the required authentication mode, either `Request` or `Require`.
Save
openstack_control_plane.yaml. Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the -w option to the end of the
getcommand to track deployment progress.
3.4. Controlling service pod placement with Topology CRs
By default, the OpenStack Operator deploys Red Hat OpenStack Services on OpenShift (RHOSO) services on any worker node. You can control the placement of each RHOSO service pod to optimize the performance of your deployment by creating Topology custom resources (CRs).
You can apply a Topology CR at the top level of the OpenStackControlPlane CR to specify the default pod spread policy for the control plane. You can also override the default spread policy in the specification of each service in the OpenStackControlPlane CR.
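As a sketch of the two levels, a top-level `topologyRef` sets the control-plane default, and a service-level `topologyRef` overrides it for that service only (the CR names here are illustrative; the full procedure follows):

```yaml
spec:
  topologyRef:
    name: default-ctlplane-topology    # default spread policy for all control plane services
  galera:
    templates:
      openstack:
        topologyRef:
          name: ha-ctlplane-topology   # overrides the default for this service instance
```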
Procedure
- Create a file on your workstation that defines a `Topology` CR that spreads the service pods across worker nodes, for example, `default_ctlplane_topology.yaml`:

  ```yaml
  apiVersion: topology.openstack.org/v1beta1
  kind: Topology
  metadata:
    name: default-ctlplane-topology
    namespace: openstack
  spec:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
      matchLabelKeys:
      - pod-template-hash
      - controller-revision-hash
  ```
  - `metadata.name`: The name must be unique, contain only lowercase alphanumeric characters and `-` (hyphens) or `.` (periods), and start and end with an alphanumeric character.
  - `topologySpreadConstraints.whenUnsatisfiable`: Specifies how the scheduler handles a pod if it does not satisfy the spread constraint:
    - `DoNotSchedule`: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure that the deployment has high availability (HA), set the HA services `rabbitmq` and `galera` to `DoNotSchedule`.
    - `ScheduleAnyway`: Instructs the scheduler to schedule the pod in any location, but to give higher precedence to topologies that minimize the skew. If you set HA services to `ScheduleAnyway`, then when the spread constraint cannot be satisfied, the pod is placed on a different host worker node. You must then move the pod manually to the correct host when the host is operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
  - `topologySpreadConstraints.matchLabelKeys`: An optional field that specifies the label keys to use to group the pods that the affinity rules are applied to. Use this field to ensure that the affinity rules are applied only to pods from the same `statefulset` or `deployment` resource when scheduling. The `matchLabelKeys` field enables the resource to be updated with new pods while the spread constraint rules are applied only to the new set of pods.
Create a file on your workstation that defines a
TopologyCR that enforces strict spread constraints for HA service pods, for example,ha_ctlplane_topology.yaml:apiVersion: topology.openstack.org/v1beta1 kind: Topology metadata: name: ha-ctlplane-topology namespace: openstack spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule matchLabelKeys: - pod-template-hash - controller-revision-hashCreate the
TopologyCRs:$ oc create -f default_ctlplane_topology.yaml $ oc create -f ha_ctlplane_topology.yaml-
Open your
OpenStackControlPlaneCR file on your workstation. Specify that the service pods, when created, are spread across the worker nodes in your control plane:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: topologyRef: name: default-ctlplane-topologyUpdate the specifications for the
rabbitmqandgaleraservices to ensure that the HA service pods, when created, are only placed on a worker node when the spread constraint can be satisfied:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: topologyRef: name: default-ctlplane-topology ... galera: templates: openstack: topologyRef: name: ha-ctlplane-topology openstack-cell1: topologyRef: name: ha-ctlplane-topology ... rabbitmq: templates: rabbitmq: topologyRef: name: ha-ctlplane-topology ...Update the control plane:
  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Verify that the service pods are running on the correct worker nodes.
  Example:

  ```console
  $ oc -n openstack get pods -o wide | grep -iE "(rabbitmq|galera)"
  openstack-galera-0   1/1   Running   0   24m   192.172.28.33    worker-0
  openstack-galera-1   1/1   Running   0   24m   192.172.16.63    worker-1
  openstack-galera-2   1/1   Running   0   24m   192.172.12.82    worker-2
  rabbitmq-server-0    1/1   Running   0   24m   192.168.24.95    worker-2
  rabbitmq-server-1    1/1   Running   0   24m   192.168.16.84    worker-0
  rabbitmq-server-2    1/1   Running   0   24m   192.168.20.137   worker-1
  ```
3.5. Adding Compute cells to the control plane
You can add Compute cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment to manage Compute nodes in groups and improve the performance and scalability of large deployments. Each cell has a dedicated message queue, runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata in a database dedicated to instances in that cell.
By default, the control plane creates two cells:
- `cell0`: The controller cell that manages global components and services, such as the Compute scheduler and the global conductor. This cell also contains a dedicated database to store information about instances that failed to be scheduled to a Compute node. You cannot connect Compute nodes to this cell.
- `cell1`: The default cell that Compute nodes are connected to when you do not create and configure additional cells.
You can add cells to your RHOSO environment when you create your control plane or at any time afterward. The following procedure adds one additional cell, cell2, and configures each cell with a dedicated nova metadata API service. Creating a dedicated nova metadata API service for each cell improves the performance and scalability of large deployments. Alternatively, you can deploy one nova metadata API service at the top level that serves all the cells.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Create a database server for each new cell that you want to add to your RHOSO environment:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    secret: osp-secret
    ...
    galera:
      enabled: true
      templates:
        openstack:
          storageRequest: 5G
          secret: cell0-secret
          replicas: 1
        openstack-cell1:
          storageRequest: 5G
          secret: cell1-secret
          replicas: 1
        openstack-cell2:
          storageRequest: 5G
          secret: cell2-secret
          replicas: 1
  ```

  - `templates.openstack`: The database used by most of the RHOSO services, including the Compute services `nova-api` and `nova-scheduler`, and `cell0`.
  - `templates.openstack-cell1`: The database to be used by `cell1`.
  - `templates.openstack-cell2`: The database to be used by `cell2`.
- Create a message bus with a unique load balancer IP for each new cell that you want to add to your RHOSO environment:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
  spec:
    secret: osp-secret
    ...
    rabbitmq:
      templates:
        rabbitmq:
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.85
              spec:
                type: LoadBalancer
        rabbitmq-cell1:
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.86
              spec:
                type: LoadBalancer
        rabbitmq-cell2:
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.87
              spec:
                type: LoadBalancer
  ```

  - `rabbitmq.rabbitmq`: The message bus used by most of the RHOSO services, including the Compute services `nova-api` and `nova-scheduler`, and `cell0`.
  - `rabbitmq.rabbitmq-cell1`: The message bus to be used by `cell1`.
  - `rabbitmq.rabbitmq-cell2`: The message bus to be used by `cell2`.
- Optional: Override the default VNC proxy service route hostname with your custom API public endpoint:

  ```yaml
  nova:
    apiOverride:
      route: {}
    cellOverride:
      cell1:
        noVNCProxy:
          route:
            spec:
              host: myvncproxy.domain.name
  ```

  Note: The hostname must be resolvable by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP `coredns`.

- Add the new cells to the `cellTemplates` configuration in the `nova` service configuration:

  ```yaml
  nova:
    ...
    template:
      ...
      metadataServiceTemplate:
        enabled: false
      secret: osp-secret
      apiDatabaseAccount: nova-api
      cellTemplates:
        cell0:
          hasAPIAccess: true
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          conductorServiceTemplate:
            replicas: 1
        cell1:
          hasAPIAccess: true
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          conductorServiceTemplate:
            replicas: 1
          metadataServiceTemplate:
            enabled: true
            replicas: 1
        cell2:
          hasAPIAccess: true
          cellDatabaseAccount: nova-cell2
          cellDatabaseInstance: openstack-cell2
          cellMessageBusInstance: rabbitmq-cell2
          conductorServiceTemplate:
            replicas: 1
          metadataServiceTemplate:
            enabled: true
            replicas: 1
  ```

  - `template.metadataServiceTemplate.enabled`: Disables the single `nova` metadata API service that serves all the cells. If you want to have just one `nova` metadata API service that serves all the cells, set this field to `true` and remove the configuration for the `metadata` service from each cell.
  - `template.cellTemplates.cell2`: The name of the new Compute cell. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the `-` symbol. For more information about the properties you can configure for a cell, view the definition for the `Nova` CRD:

    ```console
    $ oc describe crd nova
    ```
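If instead you prefer a single top-level metadata API service that serves all cells, the `nova` template might look like this sketch (the replica count is an assumption; size it for your deployment):

```yaml
nova:
  template:
    metadataServiceTemplate:
      enabled: true   # one top-level metadata API service serves all cells
      replicas: 1     # assumption: adjust for your deployment
    cellTemplates:
      cell1:
        ...           # no per-cell metadataServiceTemplate block
      cell2:
        ...
```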
- Update the control plane:

  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of the cells you created:

  ```console
  $ oc get pods -n openstack | grep cell2
  nova-cell2-conductor-0     1/1   Running   2   5d20h
  nova-cell2-novncproxy-0    1/1   Running   2   5d20h
  openstack-cell2-galera-0   1/1   Running   2   5d20h
  rabbitmq-cell2-server-0    1/1   Running   2   5d20h
  ```

  The control plane is deployed when all the pods are either completed or running.

- Optional: Confirm that the new cells are created:

  ```console
  $ oc exec -it nova-cell0-conductor-0 -- /bin/bash
  # nova-manage cell_v2 list_cells
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | Name  | UUID                                 | Transport URL                                                                    | Database Connection                                        | Disabled |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | cell0 | 00000000-0000-0000-0000-000000000000 | rabbit:                                                                          | mysql+pymysql://nova_cell0:****@openstack/nova_cell0       | False    |
  | cell1 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell1.openstack.svc:5672 | mysql+pymysql://nova_cell1:****@openstack-cell1/nova_cell1 | False    |
  | cell2 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell2.openstack.svc:5672 | mysql+pymysql://nova_cell2:****@openstack-cell2/nova_cell2 | False    |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  ```
3.6. Removing a Compute cell from the control plane
You can remove a cell from the control plane to release control plane resources. To remove a cell, you delete the references to the cell in your OpenStackControlPlane custom resource (CR) and then delete the related secrets and CRs.
You cannot remove cell0 from the control plane.
Procedure
- Open your `OpenStackControlPlane` CR file, `openstack_control_plane.yaml`, on your workstation.
- Remove the cell definition from the `cellTemplates`. For example:

  ```yaml
  spec:
    ...
    cellTemplates:
      cell0:
        cellDatabaseAccount: nova-cell0
        hasAPIAccess: true
      ...
      <cellname>:
        ...
  ```

  Replace `<cellname>` with the name of the cell you are removing, and delete that cell's definition.
- Delete the cell-specific RabbitMQ definition from the `OpenStackControlPlane` CR. For example:

  ```yaml
  spec:
    ...
    rabbitmq:
      templates:
        ...
        rabbitmq-<cellname>:
          ...
  ```

- Delete the cell-specific Galera definition from the `OpenStackControlPlane` CR. For example:

  ```yaml
  spec:
    ...
    galera:
      templates:
        ...
        openstack-<cellname>:
          ...
  ```

- Update the control plane:

  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running. Ensure that the cell you deleted is not present in the output.
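The cell-specific secrets that the deleted Galera and RabbitMQ definitions referenced are not removed automatically. A sketch of cleaning them up, assuming the removed cell used a secret named `cell2-secret` as in the cell-creation example:

```console
$ oc delete secret cell2-secret -n openstack
```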
Verification
- Open a remote shell connection to the `OpenStackClient` pod:

  ```console
  $ oc rsh -n openstack openstackclient
  ```

- Confirm that the Compute services for the cell that you removed are no longer listed:

  ```console
  $ openstack compute service list
  +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
  | ID                                   | Binary         | Host                   | Zone     | Status  | State | Updated At                 |
  +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
  | 792258c6-fc84-4f6c-8d8c-48c1c4873786 | nova-conductor | nova-cell0-conductor-0 | internal | enabled | up    | 2025-02-10T11:04:34.000000 |
  | b072bd47-38f9-40c9-8be8-f1dbd0b602f6 | nova-scheduler | nova-scheduler-0       | internal | enabled | up    | 2025-02-10T11:04:27.000000 |
  | 10f36138-90da-4ef3-8c1f-a9dfd0c4ca0c | nova-conductor | nova-cell1-conductor-0 | internal | enabled | up    | 2025-02-10T11:04:28.000000 |
  +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
  ```

- Exit the `OpenStackClient` pod:

  ```console
  $ exit
  ```
3.7. Configuring Compute notifications
You can configure the Compute service (nova) to provide notifications to Telemetry services in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. The Compute service supports designating one RabbitMQ replica as a notification server.
Prerequisites
- You have the `oc` command line tool installed on your workstation.
- You are logged in to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Update the `rabbitmq` service configuration to provide Compute service notifications. The following example creates a single notification server:

  ```yaml
  spec:
    rabbitmq:
      enabled: true
      templates:
        ...
        <rabbitmq_notification_server>:
          delayStartSeconds: 30
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: <ip_address>
              spec:
                type: LoadBalancer
  ```

  - Replace `<rabbitmq_notification_server>` with the name of your notification server, for example, `rabbitmq-notifications`.
  - Replace `<ip_address>` with the appropriate IP address, based on your networking plan and configuration.
- Register the notification server with the Compute service:

  ```yaml
  spec:
    nova:
      template:
        ...
        apiMessageBusInstance: rabbitmq
        notificationsBusInstance: <rabbitmq_notification_server>
  ```

  Replace `<rabbitmq_notification_server>` with the name of the notification server that you created in the previous step.
- Save `openstack_control_plane.yaml`.
- Update the control plane:

  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.
- Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running.
- Optional: Verify that the notification `transporturl` is properly configured. The following is an example of performing this verification:

  ```console
  $ oc get transporturl
  NAME                          STATUS   MESSAGE
  ...
  nova-api-transport            True     Setup complete
  nova-cell1-transport          True     Setup complete
  nova-notification-transport   True     Setup complete
  ```

- Create a new `OpenStackDataPlaneDeployment` CR to configure notifications on the data plane nodes and deploy the data plane. Save the CR to a file named `compute_notifications_deploy.yaml` on your workstation:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: nova-notifications
    namespace: openstack
  spec:
    nodeSets:
    - openstack-edpm
    - ...
    - <nodeSet_name>
  ```

  Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

  Note: The following example demonstrates how to obtain information about your data plane deployment and use that information to obtain a list of node sets:

  ```console
  $ oc get openstackdataplanedeployment
  NAME              NODESETS             STATUS   MESSAGE
  edpm-deployment   ["openstack-edpm"]   True     Setup complete

  $ oc get openstackdataplanedeployment edpm-deployment -o jsonpath='{.spec.nodeSets}'
  ["openstack-edpm"]
  ```
- Save the `compute_notifications_deploy.yaml` deployment file.
- Deploy the data plane updates:

  ```console
  $ oc create -f compute_notifications_deploy.yaml
  ```

- Verify that the data plane is deployed:

  ```console
  $ oc get openstackdataplanedeployment
  NAME                 STATUS   MESSAGE
  nova-notifications   True     Deployed
  ```

- Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

  ```console
  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
  ```
3.8. Enabling the Dashboard service (horizon) interface
You can enable the Dashboard service (horizon) interface to give cloud users access to the cloud through a browser.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Enable the `horizon` service:

  ```yaml
  spec:
    ...
    horizon:
      enabled: true
  ```

- Optional: Override the default route hostname for the `horizon` service with your custom API public endpoint:

  ```yaml
  spec:
    ...
    horizon:
      enabled: true
      apiOverride:
        route:
          spec:
            host: myhorizon.domain.name
  ```

  Note: The hostname must be resolvable by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP `coredns`.

- Configure the `horizon` service:

  ```yaml
  spec:
    ...
    horizon:
      ...
      template:
        customServiceConfig: ""
        memcachedInstance: memcached
        override: {}
        preserveJobs: false
        replicas: 2
        resources: {}
        secret: osp-secret
        tls: {}
  ```

  - `horizon.template.replicas`: Set `replicas` to a minimum of `2` for high availability.
- Update the control plane:

  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace:

  ```console
  $ oc get pods -n openstack
  ```

  The control plane is deployed when all the pods are either completed or running.

- Retrieve the Dashboard service endpoint URL:

  ```console
  $ oc get horizons horizon -o jsonpath='{.status.endpoint}'
  ```

  Use this URL to access the Horizon interface.
Verification
- To log in as the `admin` user, obtain the `admin` password from the `AdminPassword` parameter in the `osp-secret` secret:

  ```console
  $ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d
  ```

- Open a browser.
- Enter the Dashboard endpoint URL.
- Log in to the Dashboard with your username and password.
3.9. Enabling the Orchestration service (heat)
You can enable the Orchestration service (heat) in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Cloud users can use the Orchestration service to create and manage cloud resources such as storage, networking, instances, or applications.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Enable and configure the `heat` service:

  ```yaml
  spec:
    ...
    heat:
      apiOverride:
        route: {}
      cnfAPIOverride:
        route: {}
      enabled: true
      template:
        databaseAccount: heat
        databaseInstance: openstack
        heatAPI:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          replicas: 1
          resources: {}
          tls:
            api:
              internal: {}
              public: {}
        heatCfnAPI:
          override: {}
          replicas: 1
          resources: {}
          tls:
            api:
              internal: {}
              public: {}
        heatEngine:
          replicas: 1
          resources: {}
        memcachedInstance: memcached
        passwordSelectors:
          authEncryptionKey: HeatAuthEncryptionKey
          service: HeatPassword
        preserveJobs: false
        rabbitMqClusterName: rabbitmq
        secret: osp-secret
        serviceUser: heat
  ```

- Update the control plane:
  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace:

  ```console
  $ oc get pods -n openstack
  ```

  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the `OpenStackClient` pod:

  ```console
  $ oc rsh -n openstack openstackclient
  ```

- Confirm that the internal service endpoints are registered with each service:

  ```console
  $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service heat
  +--------------+-----------+---------------------------------------------------------------+
  | Service Name | Interface | URL                                                           |
  +--------------+-----------+---------------------------------------------------------------+
  | heat         | internal  | http://heat-internal.openstack.svc:8004                       |
  | heat         | public    | http://heat-public-openstack.apps.ostest.test.metalkube.org   |
  +--------------+-----------+---------------------------------------------------------------+
  ```

- Exit the `openstackclient` pod:

  ```console
  $ exit
  ```
3.10. Customizing the OpenStackClient API version environment variables
You can change the default OpenStackClient API versions for a Red Hat OpenStack Services on OpenShift (RHOSO) service by customizing the environment variables for the OpenStackClient pod.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Add the `openstackclient` specification and define the name-value pairs for each environment variable that you want to customize. Specify the environment variable by using the format `OS_<SERVICE>_API_VERSION`. The following example customizes the environment variables for the Identity service (keystone) and the Compute service (nova):

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    ...
    openstackclient:
      template:
        env:
        - name: OS_IDENTITY_API_VERSION
          value: "3"
        - name: OS_COMPUTE_API_VERSION
          value: "2.95"
  ```

- Update the control plane:
  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace:

  ```console
  $ oc get pods -n openstack
  ```

  The control plane is deployed when all the pods are either completed or running.
Verification
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```console
  $ oc rsh -n openstack openstackclient
  ```

- Verify that your custom environment variables are set:

  ```console
  $ env | grep API_VERSION
  OS_COMPUTE_API_VERSION=2.95
  OS_IDENTITY_API_VERSION=3
  ```

- Exit the `OpenStackClient` pod:

  ```console
  $ exit
  ```
3.11. Configuring DNS endpoints
You can change the default DNS hostname of any Red Hat OpenStack Services on OpenShift (RHOSO) service that is exposed by a route and that supports the `apiOverride` field. Use the `apiOverride` field to customize the hostname that is set for the service route.
Procedure
- Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
- Update the `apiOverride` field for the service to override the default route hostname with your custom API public endpoint:

  ```yaml
  spec:
    ...
    cinder:
      enabled: true
      apiOverride:
        route:
          spec:
            host: mycinder.domain.name
  ```

  Note: The hostname must be resolvable by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP `coredns`.

- Update the control plane:
  ```console
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

  ```console
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  ```

  The `OpenStackControlPlane` resources are created when the status is "Setup complete".

  Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace:

  ```console
  $ oc get pods -n openstack
  ```

  The control plane is deployed when all the pods are either completed or running.
Verification
- Confirm that the route is created:

  ```console
  $ oc get route -n openstack cinder
  NAME     HOST/PORT              PATH   SERVICES   PORT     TERMINATION          WILDCARD
  cinder   mycinder.domain.name          cinder     cinder   reencrypt/Redirect   None
  ```

- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```console
  $ oc rsh -n openstack openstackclient
  ```

- Verify that the endpoint is updated:

  ```console
  $ openstack endpoint list --service cinderv3 --interface public
  +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  | 5bc4760fa4944a14b1c052cc067b952c | regionOne | cinderv3     | volumev3     | True    | public    | https://mycinder.domain.name/v3 |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  ```

- Exit the `OpenStackClient` pod:

  ```console
  $ exit
  ```
3.12. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- Dynamic provisioning