Chapter 3. Customizing the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You can customize your deployed control plane with the services required for your environment.
3.1. Prerequisites
- The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
3.2. Enabling disabled services
If you enable a service that is disabled, by setting enabled: true, you must either create an empty template for the service by adding template: {} to the service definition, which ensures that the default values for the service are set, or specify some or all of the template parameter values. For example, to enable the Dashboard service (horizon) with the default service values, add the following configuration to your OpenStackControlPlane custom resource (CR):
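A minimal sketch of that configuration (only the horizon block is prescribed; the rest of your CR stays as is):

```yaml
spec:
  horizon:
    enabled: true
    # Empty template: all service parameters take their default values
    template: {}
```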
If you want to set the values for specific service parameters, add the following configuration to your OpenStackControlPlane custom resource (CR):
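For instance, a sketch that sets one specific parameter (the replicas value here is illustrative, not part of the original example):

```yaml
spec:
  horizon:
    enabled: true
    template:
      # Explicitly set parameters override the defaults
      replicas: 2
```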
Any parameters that you do not specify are set to the default value from the service template.
3.3. Controlling service pod placement with Topology CRs
By default, the OpenStack Operator deploys Red Hat OpenStack Services on OpenShift (RHOSO) services on any worker node. You can control the placement of each RHOSO service pod by creating Topology
custom resources (CRs). You can apply a Topology
CR at the top level of the OpenStackControlPlane
CR to specify the default pod spread policy for the control plane. You can also override the default spread policy in the specification of each service in the OpenStackControlPlane
CR.
Procedure
- Create a file on your workstation that defines a Topology CR that spreads the service pods across worker nodes, for example, default_ctlplane_topology.yaml:
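A possible shape for this CR, assuming the topology.openstack.org/v1beta1 API group used by the OpenStack Operator (the maxSkew, topologyKey, and label key values are illustrative):

```yaml
apiVersion: topology.openstack.org/v1beta1
kind: Topology
metadata:
  name: default-ctlplane-topology
  namespace: openstack
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    # Non-strict policy: prefer spreading, but schedule anyway if needed
    whenUnsatisfiable: ScheduleAnyway
    matchLabelKeys:
    - app
```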
  - metadata.name: The name must be unique, contain only lowercase alphanumeric characters and - (hyphens) or . (periods), and start and end with an alphanumeric character.
  - topologySpreadConstraints.whenUnsatisfiable: Specifies how the scheduler handles a pod if it does not satisfy the spread constraint:
    - DoNotSchedule: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure that the deployment has high availability (HA), set the HA services rabbitmq and galera to DoNotSchedule.
    - ScheduleAnyway: Instructs the scheduler to schedule the pod in any location, but to give higher precedence to topologies that minimize the skew. If you set HA services to ScheduleAnyway, then when the spread constraint cannot be satisfied, the pod is placed on a different host worker node. You must then move the pod manually to the correct host once the host is operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
  - topologySpreadConstraints.matchLabelKeys: An optional field that specifies the label keys to use to group the pods that the affinity rules are applied to. Use this field to ensure that the affinity rules are applied only to pods from the same statefulset or deployment resource when scheduling. The matchLabelKeys field enables the resource to be updated with new pods and the spread constraint rules to be applied to only the new set of pods.
- Create a file on your workstation that defines a Topology CR that enforces strict spread constraints for HA service pods, for example, ha_ctlplane_topology.yaml:
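A possible shape for ha_ctlplane_topology.yaml, analogous to the default Topology CR but with the strict DoNotSchedule policy (field values are illustrative):

```yaml
apiVersion: topology.openstack.org/v1beta1
kind: Topology
metadata:
  name: ha-ctlplane-topology
  namespace: openstack
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    # Strict policy: do not place an HA pod if the constraint cannot be met
    whenUnsatisfiable: DoNotSchedule
```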
- Create the Topology CRs:

  $ oc create -f default_ctlplane_topology.yaml
  $ oc create -f ha_ctlplane_topology.yaml
- Open your OpenStackControlPlane CR file on your workstation. Specify that the service pods, when created, are spread across the worker nodes in your control plane:
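One way this might look, assuming the control plane accepts a top-level topologyRef field and that the default Topology CR is named default-ctlplane-topology (both are assumptions; check your OpenStackControlPlane CRD):

```yaml
spec:
  # Default spread policy applied to all control plane services
  topologyRef:
    name: default-ctlplane-topology
```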
- Update the specifications for the rabbitmq and galera services to ensure that the HA service pods, when created, are placed on a worker node only when the spread constraint can be satisfied:
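A hedged sketch, assuming each service template accepts a topologyRef override and that the strict Topology CR is named ha-ctlplane-topology (both are assumptions):

```yaml
spec:
  rabbitmq:
    templates:
      rabbitmq:
        # Override the control plane default with the strict policy
        topologyRef:
          name: ha-ctlplane-topology
  galera:
    templates:
      openstack:
        topologyRef:
          name: ha-ctlplane-topology
```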
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Verify that the service pods are running on the correct worker nodes.
3.4. Adding Compute cells to the control plane
You can use cells to divide Compute nodes in large deployments into groups. Each cell has a dedicated message queue, runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata in a database dedicated to instances in that cell.
By default, the control plane creates two cells:
- cell0: The controller cell that manages global components and services, such as the Compute scheduler and the global conductor. This cell also contains a dedicated database to store information about instances that failed to be scheduled to a Compute node. You cannot connect Compute nodes to this cell.
- cell1: The default cell that Compute nodes are connected to when you do not create and configure additional cells.
You can add cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment when you create your control plane or at any time afterwards. The following procedure adds one additional cell, cell2, and configures each cell with a dedicated nova metadata API service. Creating a dedicated nova metadata API service for each cell improves the performance of large deployments and the scalability of your environment. Alternatively, you can deploy one nova metadata API service at the top level that serves all the cells.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Create a database server for each new cell that you want to add to your RHOSO environment:
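A hedged sketch of this galera configuration (the secret name and storageRequest values are illustrative; adapt them to your environment):

```yaml
spec:
  galera:
    enabled: true
    templates:
      # Database for most RHOSO services, including nova-api,
      # nova-scheduler, and cell0
      openstack:
        secret: osp-secret
        storageRequest: 5000M
      # Dedicated database for cell1
      openstack-cell1:
        secret: osp-secret
        storageRequest: 5000M
      # Dedicated database for the new cell2
      openstack-cell2:
        secret: osp-secret
        storageRequest: 5000M
```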
  - templates.openstack: The database used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
  - templates.openstack-cell1: The database to be used by cell1.
  - templates.openstack-cell2: The database to be used by cell2.
- Create a message bus with unique IPs for the load balancer for each new cell that you want to add to your RHOSO environment:
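One possible shape for this configuration, assuming MetalLB annotations for the load balancer (the address-pool name and IP addresses are placeholders that you must adapt to your networking plan):

```yaml
spec:
  rabbitmq:
    templates:
      # Message bus for most RHOSO services and cell0
      rabbitmq:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      # Message bus for cell1, with its own unique IP
      rabbitmq-cell1:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
      # Message bus for the new cell2, with its own unique IP
      rabbitmq-cell2:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.87
            spec:
              type: LoadBalancer
```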
  - rabbitmq.rabbitmq: The message bus used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
  - rabbitmq.rabbitmq-cell1: The message bus to be used by cell1.
  - rabbitmq.rabbitmq-cell2: The message bus to be used by cell2.
- Optional: Override the default VNC proxy service route hostname with your custom API public endpoint.

  Note: The hostname must be resolved by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.

- Add the new cells to the cellTemplates configuration in the nova service configuration:
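A hedged sketch of the cellTemplates configuration (the field names follow the Nova CRD conventions but are assumptions; verify them with oc describe crd nova):

```yaml
spec:
  nova:
    template:
      metadataServiceTemplate:
        # Disable the single top-level metadata API service
        enabled: false
      cellTemplates:
        cell0:
          hasAPIAccess: true
        cell1:
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          # Dedicated metadata API service for this cell
          metadataServiceTemplate:
            enabled: true
          hasAPIAccess: true
        cell2:
          cellDatabaseInstance: openstack-cell2
          cellMessageBusInstance: rabbitmq-cell2
          metadataServiceTemplate:
            enabled: true
          hasAPIAccess: true
```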
  - template.metadataServiceTemplate.enabled: Disables the single nova metadata API service that serves all the cells. If you want to have just one nova metadata API service that serves all the cells, set this field to true and remove the configuration for the metadata service from each cell.
  - template.cellTemplates.cell2: The name of the new Compute cell. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol. For more information about the properties that you can configure for a cell, view the definition for the Nova CRD:

    $ oc describe crd nova
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells that you created:

  $ oc get pods -n openstack | grep cell2
  nova-cell2-conductor-0     1/1   Running   2   5d20h
  nova-cell2-novncproxy-0    1/1   Running   2   5d20h
  openstack-cell2-galera-0   1/1   Running   2   5d20h
  rabbitmq-cell2-server-0    1/1   Running   2   5d20h

  The control plane is deployed when all the pods are either completed or running.
- Optional: Confirm that the new cells are created:

  $ oc exec -it nova-cell0-conductor-0 /bin/bash
  # nova-manage cell_v2 list_cells
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | Name  | UUID                                 | Transport URL                                                                    | Database Connection                                        | Disabled |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
  | cell0 | 00000000-0000-0000-0000-000000000000 | rabbit:                                                                          | mysql+pymysql://nova_cell0:****@openstack/nova_cell0       | False    |
  | cell1 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell1.openstack.svc:5672 | mysql+pymysql://nova_cell1:****@openstack-cell1/nova_cell1 | False    |
  | cell2 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell2.openstack.svc:5672 | mysql+pymysql://nova_cell2:****@openstack-cell2/nova_cell2 | False    |
  +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
3.5. Removing a Compute cell from the control plane
You can remove a cell from the control plane to release control plane resources. To remove a cell, you delete the references to the cell in your OpenStackControlPlane
custom resource (CR) and then delete the related secrets and CRs.
You cannot remove cell0 from the control plane.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
- Remove the cell definition from the cellTemplates. For example:
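A hedged illustration of the entry to delete (the field names under the cell are assumptions based on the Nova CRD):

```yaml
spec:
  nova:
    template:
      cellTemplates:
        # Delete this entire cell definition
        <cellname>:
          cellDatabaseInstance: openstack-<cellname>
          cellMessageBusInstance: rabbitmq-<cellname>
```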
  - Replace <cellname> with the name of the cell you are removing and delete the line.
- Delete the cell-specific RabbitMQ definition from the OpenStackControlPlane CR. For example:
- Delete the cell-specific Galera definition from the OpenStackControlPlane CR. For example:
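A hedged sketch of the two definitions to delete (the fields inside each block are illustrative):

```yaml
spec:
  rabbitmq:
    templates:
      # Delete the cell-specific message bus block
      rabbitmq-<cellname>:
        replicas: 1
  galera:
    templates:
      # Delete the cell-specific database block
      openstack-<cellname>:
        storageRequest: 5000M
```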
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
  Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells that you created. The control plane is deployed when all the pods are either completed or running. Ensure that the cell you deleted is not present in the output.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service.
- Exit the OpenStackClient pod:

  $ exit
3.6. Configuring Compute notifications
You can configure the Compute service (nova) to provide notifications to Telemetry services in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. The Compute service supports designating one RabbitMQ replica as a notification server.
Prerequisites

- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Update the rabbitmq service configuration to provide Compute service notifications. The following example creates a single notifications server:
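A hedged sketch of a single notification server definition, reusing the placeholders described below and assuming MetalLB annotations for the load balancer:

```yaml
spec:
  rabbitmq:
    templates:
      # Dedicated RabbitMQ replica that acts as the notification server
      <rabbitmq_notification_server>:
        replicas: 1
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: <ip_address>
            spec:
              type: LoadBalancer
```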
  - Replace <rabbitmq_notification_server> with the name of your notification server, for example, rabbitmq-notifications.
  - Replace <ip_address> with the appropriate IP address, based on your networking plan and configuration.
- Register the notification server with the Compute service:
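One possible form of this registration, assuming the Nova CRD exposes a notificationsBusInstance field (verify the field name with oc describe crd nova):

```yaml
spec:
  nova:
    template:
      # Point the Compute service at the dedicated notification server
      notificationsBusInstance: <rabbitmq_notification_server>
```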
  - Replace <rabbitmq_notification_server> with the name of the notification server that you created in the previous step.
- Save openstack_control_plane.yaml.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells that you created. The control plane is deployed when all the pods are either completed or running.
- Optional: Verify that the notification transport_url is properly configured.
- Create a new OpenStackDataPlaneDeployment CR to configure notifications on the data plane nodes and deploy the data plane. Save the CR to a file named compute_notifications_deploy.yaml on your workstation.

  Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.

  Note: The following example demonstrates how to obtain information about your data plane deployment and use that information to obtain a list of node sets:

  $ oc get openstackdataplanedeployment
  NAME              NODESETS             STATUS   MESSAGE
  edpm-deployment   ["openstack-edpm"]   True     Setup complete

  $ oc get openstackdataplanedeployment edpm-deployment -o jsonpath='{.spec.nodeSets}'
  ["openstack-edpm"]
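A hedged sketch of compute_notifications_deploy.yaml (the CR name and servicesOverride list are assumptions; adapt them to your deployment):

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: compute-notifications
  namespace: openstack
spec:
  nodeSets:
  - <nodeSet_name>
  # Re-run only the nova service to push the notification configuration
  servicesOverride:
  - nova
```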
- Save the compute_notifications_deploy.yaml deployment file.
- Deploy the data plane updates:
$ oc create -f compute_notifications_deploy.yaml
- Verify that the data plane is deployed:

  $ oc get openstackdataplanenodeset
  NAME                 STATUS   MESSAGE
  nova-notifications   True     Deployed

- Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
3.7. Enabling the Dashboard service (horizon) interface
You can enable the Dashboard service (horizon) interface for cloud user access to the cloud through a browser.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Enable the horizon service:

  spec:
    ...
    horizon:
      enabled: true
- Optional: Override the default route hostname for the horizon service with your custom API public endpoint.

  Note: The hostname must be resolved by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.

- Configure the horizon service:
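A hedged sketch of the horizon configuration, combining the route override and the service template (the hostname is a placeholder, and the apiOverride layout is an assumption):

```yaml
spec:
  horizon:
    enabled: true
    apiOverride:
      route:
        spec:
          # Hypothetical custom public endpoint; must resolve in your DNS
          host: horizon.example.com
    template:
      # Minimum of 2 replicas for high availability
      replicas: 2
```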
  - horizon.template.replicas: Set replicas to a minimum of 2 for high availability.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
- Retrieve the Dashboard service endpoint URL:

  $ oc get horizons horizon -o jsonpath='{.status.endpoint}'

  Use this URL to access the Horizon interface.
Verification
- To log in as the admin user, obtain the admin password from the AdminPassword parameter in the osp-secret secret:

  $ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d

- Open a browser.
- Enter the Dashboard endpoint URL.
- Log in to the Dashboard with your username and password.
3.8. Enabling the Orchestration service (heat)
You can enable the Orchestration service (heat) in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Cloud users can use the Orchestration service to create and manage cloud resources such as storage, networking, instances, or applications.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Enable and configure the heat service:
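A hedged sketch of the heat block (the field names follow the Heat operator conventions but are assumptions; the secret and replica values are illustrative):

```yaml
spec:
  heat:
    enabled: true
    template:
      databaseInstance: openstack
      secret: osp-secret
      heatAPI:
        replicas: 3
      heatEngine:
        replicas: 3
```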
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service.
- Exit the openstackclient pod:

  $ exit
3.9. Customizing the OpenStackClient API version environment variables
You can change the default OpenStackClient API versions for a Red Hat OpenStack Services on OpenShift (RHOSO) service by customizing the environment variables for the OpenStackClient pod.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the openstackclient specification and define the name-value pairs for each environment variable that you want to customize. Specify the environment variable by using the format OS_<SERVICE>_API_VERSION. The following example customizes the environment variables for the Identity (keystone) and Compute (nova) services:
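A hedged sketch of that specification (the env list layout under template is an assumption; the values match the verification output later in this section):

```yaml
spec:
  openstackclient:
    template:
      env:
      - name: OS_IDENTITY_API_VERSION
        value: "3"
      - name: OS_COMPUTE_API_VERSION
        value: "2.95"
```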
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
  Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient

- Verify that your custom environment variables are set:

  $ env | grep API_VERSION
  OS_COMPUTE_API_VERSION=2.95
  OS_IDENTITY_API_VERSION=3

- Exit the OpenStackClient pod:

  $ exit
3.10. Configuring DNS endpoints
You can change the default DNS hostname of any Red Hat OpenStack Services on OpenShift (RHOSO) service that is exposed by a route and that supports the apiOverride
field. You change the default DNS hostname of the service by using the apiOverride
field to customize the hostname that is set for a route.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Update the apiOverride field for the service to override the default route hostname with your custom API public endpoint:

  Note: The hostname must be resolved by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP coredns.
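For example, a sketch that overrides the Block Storage service (cinder) hostname, matching the route shown in the verification step (the apiOverride layout is an assumption):

```yaml
spec:
  cinder:
    apiOverride:
      route:
        spec:
          host: mycinder.domain.name
```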
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
  Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Confirm that the route is created:

  $ oc get route -n openstack cinder
  NAME     HOST/PORT              PATH   SERVICES   PORT     TERMINATION          WILDCARD
  cinder   mycinder.domain.name          cinder     cinder   reencrypt/Redirect   None

- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient

- Verify that the endpoint is updated.
- Exit the OpenStackClient pod:

  $ exit
3.11. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- About advertising for the IP address pools
- Dynamic provisioning