Chapter 3. Customizing the control plane
To customize the Red Hat OpenStack Services on OpenShift (RHOSO) control plane for your environment, you can add, remove, or configure the RHOSO services that run on the control plane according to your requirements.
3.1. Prerequisites
- The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
3.2. Enabling disabled services
If you enable a service that is disabled by setting enabled: true, you must also provide a template for the service. Either create an empty template for the service by adding template: {} to the service definition, which sets the default values for the service, or specify some or all of the template parameter values. For example, to enable the Dashboard service (horizon) with the default service values, add the following configuration to your OpenStackControlPlane custom resource (CR):
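A sketch of that configuration; the apiVersion and CR name are assumptions that follow the usual OpenStackControlPlane conventions, so match them to your deployment:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane   # assumed CR name; use the name from your deployment
spec:
  horizon:
    enabled: true
    template: {}   # empty template: all service parameters use their default values
```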
If you want to set the values of specific service parameters, add the following configuration to your OpenStackControlPlane custom resource (CR):
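A sketch of the same service with an explicit parameter value; replicas: 2 follows the high-availability guidance given for the Dashboard service later in this chapter:

```yaml
spec:
  horizon:
    enabled: true
    template:
      replicas: 2   # explicitly set; any parameters you omit keep their template defaults
```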
Any parameters that you do not specify are set to the default value from the service template.
3.3. Configuring authentication for the memcached service
You can configure the cache maintained by the memcached service to require authentication. Authentication increases the security of your cloud by restricting access to the cached data of your cloud. By default, no authentication is required.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
You can set the authentication mode for the memcached service on your control plane. The following authentication modes are supported:
- None: No authentication is required. The client does not require a certificate from the server to proceed with the connection. This mode has the least impact upon performance but does not provide any security. This is the default authentication mode.
- Request: Authentication is attempted. The client requests a certificate from the server but does not strictly require it to proceed with the connection. This mode provides flexible connections but weaker security.
- Require: Authentication is enforced. The client demands that the server present a valid and verifiable certificate. If the server fails to provide one, or if the client cannot successfully validate the certificate against its trusted Certificate Authorities (CAs), the connection fails. This mode provides strong security at the cost of brittle connections.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Update the memcached service with the required authentication mode. The following example configures the default memcached cluster (memcached):
  - Replace <authentication_mode> with the required authentication mode, either Request or Require.
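A sketch of the memcached definition this step describes; the tls.mtls.sslVerifyMode field name is an assumption based on the Memcached CRD, so verify it against the CRD installed in your cluster before applying:

```yaml
spec:
  memcached:
    templates:
      memcached:                                   # the default memcached cluster
        tls:
          mtls:
            sslVerifyMode: <authentication_mode>   # assumed field name; Request or Require
```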
- Save openstack_control_plane.yaml.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
3.4. Controlling service pod placement with Topology CRs
By default, the OpenStack Operator deploys Red Hat OpenStack Services on OpenShift (RHOSO) services on any worker node. You can control the placement of each RHOSO service pod to optimize the performance of your deployment by creating Topology custom resources (CRs).
You can apply a Topology CR at the top level of the OpenStackControlPlane CR to specify the default pod spread policy for the control plane. You can also override the default spread policy in the specification of each service in the OpenStackControlPlane CR.
Procedure
- Create a file on your workstation that defines a Topology CR that spreads the service pods across worker nodes, for example, default_ctlplane_topology.yaml.
  - metadata.name: The name must be unique, contain only lowercase alphanumeric characters, - (hyphens), or . (periods), and start and end with an alphanumeric character.
  - topologySpreadConstraints.whenUnsatisfiable: Specifies how the scheduler handles a pod that does not satisfy the spread constraint:
    - DoNotSchedule: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure that the deployment has high availability (HA), set the HA services rabbitmq and galera to DoNotSchedule.
    - ScheduleAnyway: Instructs the scheduler to schedule the pod in any location, but to give higher precedence to topologies that minimize the skew. If you set HA services to ScheduleAnyway, then when the spread constraint cannot be satisfied, the pod is placed on a different host worker node. You must then move the pod manually to the correct host when the host is operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
  - topologySpreadConstraints.matchLabelKeys: An optional field that specifies the label keys used to group the pods that the affinity rules apply to. Use this field to ensure that the affinity rules apply only to pods from the same statefulset or deployment resource when scheduling. The matchLabelKeys field enables the resource to be updated with new pods while the spread constraint rules apply only to the new set of pods.
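A sketch of what default_ctlplane_topology.yaml might contain; the apiVersion and the service label key used in matchLabelKeys are assumptions based on the OpenStack Operator Topology CRD, so verify them against the CRD installed in your cluster:

```yaml
apiVersion: topology.openstack.org/v1beta1   # assumed Topology CRD group/version
kind: Topology
metadata:
  name: default-ctlplane-topology
  namespace: openstack
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname    # spread pods across worker nodes
      whenUnsatisfiable: ScheduleAnyway      # place the pod even when the constraint cannot be met
      matchLabelKeys:
        - service                            # assumed label key; groups pods of the same service
```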
- Create a file on your workstation that defines a Topology CR that enforces strict spread constraints for HA service pods, for example, ha_ctlplane_topology.yaml.
- Create the Topology CRs:

  $ oc create -f default_ctlplane_topology.yaml
  $ oc create -f ha_ctlplane_topology.yaml
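A sketch of what ha_ctlplane_topology.yaml might contain, under the same CRD assumptions as the default Topology CR; the only meaningful difference is the strict whenUnsatisfiable setting:

```yaml
apiVersion: topology.openstack.org/v1beta1   # assumed Topology CRD group/version
kind: Topology
metadata:
  name: ha-ctlplane-topology
  namespace: openstack
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule       # do not place HA pods when the constraint cannot be met
      matchLabelKeys:
        - service                            # assumed label key
```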
- Open your OpenStackControlPlane CR file on your workstation. Specify that the service pods, when created, are spread across the worker nodes in your control plane.
- Update the specifications for the rabbitmq and galera services to ensure that the HA service pods, when created, are placed on a worker node only when the spread constraint can be satisfied.
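A sketch of how the two Topology CRs might be referenced from the OpenStackControlPlane CR; the placement of the topologyRef field at the top level and under each service template is an assumption to verify against your installed CRDs:

```yaml
spec:
  topologyRef:
    name: default-ctlplane-topology      # default spread policy for all control plane services
  rabbitmq:
    templates:
      rabbitmq:
        topologyRef:
          name: ha-ctlplane-topology     # strict spread constraint for the HA service
  galera:
    templates:
      openstack:
        topologyRef:
          name: ha-ctlplane-topology
```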
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Verify that the service pods are running on the correct worker nodes.
3.5. Adding Compute cells to the control plane
You can add Compute cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment to manage Compute nodes in groups and improve the performance and scalability of large deployments. Each cell has a dedicated message queue, runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata in a database dedicated to instances in that cell.
By default, the control plane creates two cells:
- cell0: The controller cell that manages global components and services, such as the Compute scheduler and the global conductor. This cell also contains a dedicated database to store information about instances that failed to be scheduled to a Compute node. You cannot connect Compute nodes to this cell.
- cell1: The default cell that Compute nodes are connected to when you do not create and configure additional cells.
You can add cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment when you create your control plane or at any time afterwards. The following procedure adds one additional cell, cell2, and configures each cell with a dedicated nova metadata API service. Creating a dedicated nova metadata API service for each cell improves the performance of large deployments and the scalability of your environment. Alternatively, you can deploy one nova metadata API service on the top level that serves all the cells.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Create a database server for each new cell that you want to add to your RHOSO environment:
  - templates.openstack: The database used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
  - templates.openstack-cell1: The database to be used by cell1.
  - templates.openstack-cell2: The database to be used by cell2.
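A sketch of the database server definitions described above; the replicas, secret, and storage values are illustrative and should match your environment:

```yaml
spec:
  galera:
    templates:
      openstack:            # database for most RHOSO services and cell0
        replicas: 3
        secret: osp-secret
        storageRequest: 5G
      openstack-cell1:      # database for cell1
        replicas: 3
        secret: osp-secret
        storageRequest: 5G
      openstack-cell2:      # database for cell2
        replicas: 3
        secret: osp-secret
        storageRequest: 5G
```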
- Create a message bus with a unique load balancer IP for each new cell that you want to add to your RHOSO environment:
  - rabbitmq.rabbitmq: The message bus used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
  - rabbitmq.rabbitmq-cell1: The message bus to be used by cell1.
  - rabbitmq.rabbitmq-cell2: The message bus to be used by cell2.
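A sketch of the message bus definitions described above, assuming MetalLB load balancer annotations; the address pool name and the IP addresses are placeholders for values from your networking plan:

```yaml
spec:
  rabbitmq:
    templates:
      rabbitmq:             # message bus for most RHOSO services and cell0
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi    # assumed pool name
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85 # placeholder IP
            spec:
              type: LoadBalancer
      rabbitmq-cell1:       # message bus for cell1
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
      rabbitmq-cell2:       # message bus for cell2
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.87
            spec:
              type: LoadBalancer
```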
- Optional: Override the default VNC proxy service route hostname with your custom API public endpoint.

  Note: The hostname must be resolved by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.

- Add the new cells to the cellTemplates configuration in the nova service configuration:
  - template.metadataServiceTemplate.enabled: Disables the single nova metadata API service that serves all the cells. If you want just one nova metadata API service that serves all the cells, set this field to true and remove the configuration for the metadata service from each cell.
  - template.cellTemplates.cell2: The name of the new Compute cell. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.

  For more information about the properties you can configure for a cell, view the definition for the Nova CRD:

  $ oc describe crd nova
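A sketch of the cellTemplates configuration described above; the exact property names are assumptions based on the Nova CRD, so verify them with the oc describe crd nova command from this step:

```yaml
spec:
  nova:
    template:
      metadataServiceTemplate:
        enabled: false                          # disable the single metadata API serving all cells
      cellTemplates:
        cell0:
          cellDatabaseInstance: openstack       # shared database; no Compute nodes connect here
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          metadataServiceTemplate:
            enabled: true                       # dedicated metadata API for this cell
          hasAPIAccess: true
        cell2:                                  # the new cell; name limited to 20 characters
          cellDatabaseInstance: openstack-cell2
          cellMessageBusInstance: rabbitmq-cell2
          metadataServiceTemplate:
            enabled: true
          hasAPIAccess: true
```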
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created:

  $ oc get pods -n openstack | grep cell2
  nova-cell2-conductor-0     1/1   Running   2   5d20h
  nova-cell2-novncproxy-0    1/1   Running   2   5d20h
  openstack-cell2-galera-0   1/1   Running   2   5d20h
  rabbitmq-cell2-server-0    1/1   Running   2   5d20h

  The control plane is deployed when all the pods are either completed or running.
- Optional: Confirm that the new cells are created:

  $ oc exec -it nova-cell0-conductor-0 /bin/bash
  # nova-manage cell_v2 list_cells
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | Name  | UUID                                 | Transport URL                                                                    | Database Connection                                        | Disabled |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | cell0 | 00000000-0000-0000-0000-000000000000 | rabbit:                                                                          | mysql+pymysql://nova_cell0:****@openstack/nova_cell0       | False    |
  | cell1 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell1.openstack.svc:5672 | mysql+pymysql://nova_cell1:****@openstack-cell1/nova_cell1 | False    |
  | cell2 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell2.openstack.svc:5672 | mysql+pymysql://nova_cell2:****@openstack-cell2/nova_cell2 | False    |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
3.6. Removing a Compute cell from the control plane
You can remove a cell from the control plane to release control plane resources. To remove a cell, you delete the references to the cell in your OpenStackControlPlane custom resource (CR) and then delete the related secrets and CRs.
You cannot remove cell0 from the control plane.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
- Remove the cell definition from the cellTemplates. For example:
  - Replace <cellname> with the name of the cell you are removing and delete the line.
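A sketch of the edited cellTemplates section, with the same property-name assumptions as the cell-creation example earlier in this chapter; the definition of the cell you are removing, shown as <cellname>, is deleted in its entirety:

```yaml
spec:
  nova:
    template:
      cellTemplates:
        cell0:
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          hasAPIAccess: true
        # <cellname>: delete this key and its entire nested definition
```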
- Delete the cell-specific RabbitMQ definition from the OpenStackControlPlane CR.
- Delete the cell-specific Galera definition from the OpenStackControlPlane CR.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running. Ensure that the cell you deleted is not present in the output.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service.
- Exit the OpenStackClient pod:

  $ exit
3.7. Configuring Compute notifications
You can configure the Compute service (nova) to provide notifications to Telemetry services in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. The Compute service supports designating one RabbitMQ replica as a notification server.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Update the rabbitmq service configuration to provide Compute service notifications. The following example creates a single notifications server:
  - Replace <rabbitmq_notification_server> with the name of your notification server, for example, rabbitmq-notifications.
  - Replace <ip_address> with the appropriate IP address. Adjust this value based on your networking plan and configuration.
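A sketch of the additional RabbitMQ cluster this step creates, reusing the MetalLB service override pattern used for cell message buses earlier in this chapter; the address pool name is an assumption:

```yaml
spec:
  rabbitmq:
    templates:
      <rabbitmq_notification_server>:   # for example, rabbitmq-notifications
        replicas: 1                     # a single notifications server
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi   # assumed pool name
                metallb.universe.tf/loadBalancerIPs: <ip_address>
            spec:
              type: LoadBalancer
```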
- Register the notification server with the Compute service:
  - Replace <rabbitmq_notification_server> with the name of the notification server created in the previous step.
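A sketch of the registration; the notificationsBusInstance field name is an assumption based on the Nova CRD and should be confirmed with oc describe crd nova:

```yaml
spec:
  nova:
    template:
      notificationsBusInstance: <rabbitmq_notification_server>   # assumed field name
```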
- Save openstack_control_plane.yaml.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running.
- Optional: Verify that the notification transport_url is properly configured.
- Create a new OpenStackDataPlaneDeployment CR to configure notifications on the data plane nodes and deploy the data plane. Save the CR to a file named compute_notifications_deploy.yaml on your workstation.
  - Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.

  Note: The following example demonstrates how to obtain information about your data plane deployment and use that information to obtain a list of node sets:

  $ oc get openstackdataplanedeployment
  NAME              NODESETS             STATUS   MESSAGE
  edpm-deployment   ["openstack-edpm"]   True     Setup complete

  $ oc get openstackdataplanedeployment edpm-deployment -o jsonpath='{.spec.nodeSets}'
  ["openstack-edpm"]
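A sketch of compute_notifications_deploy.yaml; the apiVersion follows the usual data plane CRD group, the CR name is illustrative, and restricting the run to the nova service with servicesOverride is an assumption to verify against the OpenStackDataPlaneDeployment CRD:

```yaml
apiVersion: dataplane.openstack.org/v1beta1   # assumed CRD group/version
kind: OpenStackDataPlaneDeployment
metadata:
  name: compute-notifications                 # illustrative name
  namespace: openstack
spec:
  nodeSets:
    - <nodeSet_name>                          # for example, openstack-edpm
  servicesOverride:
    - nova                                    # assumed: rerun only the nova service
```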
- Save the compute_notifications_deploy.yaml deployment file.
- Deploy the data plane updates:

  $ oc create -f compute_notifications_deploy.yaml

- Verify that the data plane is deployed:

  $ oc get openstackdataplanenodeset
  NAME                 STATUS   MESSAGE
  nova-notifications   True     Deployed

- Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
3.8. Enabling the Dashboard service (horizon) interface
You can enable the Dashboard service (horizon) interface for cloud user access to the cloud through a browser.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Enable the horizon service:

  spec:
    ...
    horizon:
      enabled: true

- Optional: Override the default route hostname for the horizon service with your custom API public endpoint.

  Note: The hostname must be resolved by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.

- Configure the horizon service:
  - horizon.template.replicas: Set replicas to a minimum of 2 for high availability.
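A sketch combining the optional hostname override and the service configuration from the steps above; the hostname is a placeholder, and the apiOverride.route layout mirrors the DNS endpoint example later in this chapter:

```yaml
spec:
  horizon:
    enabled: true
    apiOverride:
      route:
        spec:
          host: horizon.example.com   # placeholder public endpoint for your data center DNS
    template:
      replicas: 2                     # minimum of 2 for high availability
```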
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.

- Retrieve the Dashboard service endpoint URL:

  $ oc get horizons horizon -o jsonpath='{.status.endpoint}'

  Use this URL to access the Horizon interface.
Verification
- To log in as the admin user, obtain the admin password from the AdminPassword parameter in the osp-secret secret:

  $ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d

- Open a browser.
- Enter the Dashboard endpoint URL.
- Log in to the Dashboard with your username and password.
3.9. Enabling the Orchestration service (heat)
You can enable the Orchestration service (heat) in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Cloud users can use the Orchestration service to create and manage cloud resources such as storage, networking, instances, or applications.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Enable and configure the heat service.
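A sketch of enabling the heat service; the databaseInstance and secret values are assumptions matching the defaults used elsewhere in this chapter:

```yaml
spec:
  heat:
    enabled: true
    template:
      databaseInstance: openstack   # assumed: the shared control plane database
      secret: osp-secret            # assumed: the default service secret
```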
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service.
- Exit the openstackclient pod:

  $ exit
3.10. Customizing the OpenStackClient API version environment variables
You can change the default OpenStackClient API versions for a Red Hat OpenStack Services on OpenShift (RHOSO) service by customizing the environment variables for the OpenStackClient pod.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the openstackclient specification and define the name-value pairs for each environment variable that you want to customize. Specify each environment variable by using the format OS_<SERVICE>_API_VERSION. The following example customizes the environment variables for the Identity service (keystone) and the Compute service (nova).
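A sketch of that specification, using the versions shown in the verification step; placing env under openstackclient.template is an assumption to verify against the OpenStackClient CRD:

```yaml
spec:
  openstackclient:
    template:
      env:
        - name: OS_IDENTITY_API_VERSION   # Identity service (keystone)
          value: "3"
        - name: OS_COMPUTE_API_VERSION    # Compute service (nova)
          value: "2.95"
```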
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient

- Verify that your custom environment variables are set:

  $ env | grep API_VERSION
  OS_COMPUTE_API_VERSION=2.95
  OS_IDENTITY_API_VERSION=3

- Exit the OpenStackClient pod:

  $ exit
3.11. Configuring DNS endpoints
You can change the default DNS hostname of any Red Hat OpenStack Services on OpenShift (RHOSO) service that is exposed by a route and that supports the apiOverride field. You change the default DNS hostname of the service by using the apiOverride field to customize the hostname that is set for a route.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Update the apiOverride field for the service to override the default route hostname with your custom API public endpoint.

  Note: The hostname must be resolved by the DNS service in your data center to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.
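A sketch of the override, using the Block Storage service (cinder) and the hostname that appears in the verification output; the route spec layout is an assumption to verify against your service CRD:

```yaml
spec:
  cinder:
    apiOverride:
      route:
        spec:
          host: mycinder.domain.name   # must be resolvable by your data center DNS
```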
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Confirm that the route is created:

  $ oc get route -n openstack cinder
  NAME     HOST/PORT              PATH   SERVICES   PORT     TERMINATION          WILDCARD
  cinder   mycinder.domain.name          cinder     cinder   reencrypt/Redirect   None

- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient

- Verify that the endpoint is updated.
- Exit the OpenStackClient pod:

  $ exit
3.12. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- Dynamic provisioning