Chapter 4. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.
4.1. Prerequisites
- The OpenStack Operator (openstack-operator) is installed.
- The RHOCP cluster is prepared for RHOSO networks.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default: openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack

  This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, check that they do not prevent communication between the openstack-operators namespace and the control plane namespace.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
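If existing network policies are present, one way to guarantee the required communication is an explicit allow rule in the control plane namespace. The following is a minimal sketch using the standard Kubernetes NetworkPolicy schema; the policy name and the use of the kubernetes.io/metadata.name namespace label are illustrative assumptions, not values mandated by RHOSO:

```yaml
# Hypothetical example: allow ingress into the openstack namespace from
# pods in the openstack-operators namespace. The policy name and the
# namespace label used in the selector are assumptions for illustration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openstack-operators   # illustrative name
  namespace: openstack
spec:
  podSelector: {}                        # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openstack-operators
```

Review any deny-style policies already in place before adding rules; an allow rule in one policy does not override an explicit isolation model you may have designed elsewhere.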
4.2. Creating the control plane
You must define an OpenStackControlPlane custom resource (CR) to create the control plane and enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack

Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    secret: osp-secret

Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

  spec:
    secret: osp-secret
    storageClass: <RHOCP_storage_class>

- Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.

Add the global RabbitMQ settings messagingBus and notificationsBus to specify the default RabbitMQ cluster for RHOSO services:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    messagingBus:
      cluster: <rabbitmq-cluster>
    notificationsBus:
      cluster: <rabbitmq-cluster>

- Replace <rabbitmq-cluster> with the default RabbitMQ cluster that all the RHOSO services use, in this example, rabbitmq. You can customize the RabbitMQ interface for OpenStack services. For more information, see Understand the RabbitMQ interface for OpenStack services in the Monitoring high availability services guide.
Add the following service configurations.

Note:
- The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
- You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
Block Storage service (cinder):

  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0

- cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
- cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the Block Storage volume service component in Configuring persistent storage.
Compute service (nova):

  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          messagingBus:
            cluster: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          messagingBus:
            cluster: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret

Note: A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

DNS service for the data plane:
  dns:
    template:
      options:
      - key: server
        values:
        - <IP address for DNS server reachable from dnsmasq pod>
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2

- options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there is one key-value pair defined because there is only one DNS server configured to forward requests to.
- key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:
  - server
  - rev-server
  - srv-host
  - txt-record
  - ptr-record
  - rebind-domain-ok
  - naptr-record
  - cname
  - host-record
  - caa-record
  - dns-rr
  - auth-zone
  - synth-domain
  - no-negcache
  - local
- values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

Note: This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
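To illustrate the key-value structure described above, the following sketch combines a generic upstream server with a domain-scoped server in one dns.template.options section. The IP addresses and the example.com domain are illustrative placeholders, not required values:

```yaml
# Illustrative only: IP addresses and the example.com domain are placeholders.
dns:
  template:
    options:
      - key: server
        values:
          - 192.168.122.1          # generic upstream DNS server
      - key: server
        values:
          - /example.com/10.0.0.5  # forward only example.com queries to this server
```

The /domain/address form follows the standard dnsmasq server= syntax, so queries for the named domain are forwarded to the given server while all other queries use the generic upstream.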
Identity service (keystone):

  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3

Image service (glance):

  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage

- glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
Key Management service (barbican):

  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1

Networking service (neutron):

  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi

Object Storage service (swift):

  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      secret: osp-secret
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageRequest: 10Gi

Optimize service (watcher):

  watcher:
    enabled: true

OVN:

  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}

Placement service (placement):

  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret

Optional: Telemetry service (ceilometer, prometheus):

  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        dataplaneNetwork: ctlplane
        networkAttachments:
        - ctlplane
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
- telemetry.enabled: Set to false if your RHOSO environment does not require the Telemetry service. When true, you must have installed the Cluster Observability Operator on your RHOCP cluster.
- telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape data plane node_exporter endpoints.
- telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the data plane nodes.
- telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
Add the following service configurations to implement high availability (HA):
A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3

A single memcached cluster that contains three memcached servers:

  memcached:
    templates:
      memcached:
        replicas: 3

A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

  rabbitmq:
    templates:
      rabbitmq:
        persistence:
          storage: <rabbitmq_cluster_storage>
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        persistence:
          storage: <rabbitmq_cluster_storage>
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer

Replace <rabbitmq_cluster_storage> with sufficient storage for each RabbitMQ cluster, for example, 10Gi.

Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
By default, RabbitMQ uses quorum queues to provide increased data safety and high availability at the expense of a slight increase in latency. A quorum queue is a durable replicated queue based on the Raft consensus algorithm.

Warning: Do not configure the RabbitMQ clusters of an existing RHOSO deployment to use quorum queues. If you do, your existing RHOSO services will not start or work properly.

Note: Quorum queues must be saved to disk to ensure their durability, and they can occupy significant disk space. Depending on the workload and settings, the time taken to free up space after messages are consumed or expire can vary substantially. Ensure that sufficient storage is assigned to each RabbitMQ cluster.
The RabbitMQ service can be configured to provide an improved failover behavior that bypasses the default availability checks of OpenShift, which can wait up to five minutes to declare a service dead. By implementing this improved failover behavior, each OpenStack service checks to see if a RabbitMQ pod is alive and moves to the next pod if it is not.
NoteTo implement this improved failover behavior for a RabbitMQ cluster, you must reserve three free IP addresses from the default RHOSO MetalLB
IPAddressPoolrange already used for RabbitMQ.For example, if you want to improve the failover behavior of the
rabbitmq-cell1RabbitMQ cluster add the following lines to the end of therabbitmq.templates.rabbitmq-cell1section of theOpenStackControlPlaneCR:podOverride: services: - metadata: annotations: metallb.universe.tf/address-pool: "internalapi" metallb.universe.tf/loadBalancerIPs: "172.17.0.87" spec: type: LoadBalancer - metadata: annotations: metallb.universe.tf/address-pool: "internalapi" metallb.universe.tf/loadBalancerIPs: "172.17.0.88" spec: type: LoadBalancer ! - metadata: annotations: metallb.universe.tf/address-pool: "internalapi" metallb.universe.tf/loadBalancerIPs: "172.17.0.89" spec: type: LoadBalancer
Create the control plane:

  $ oc create -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Note: Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands:

  $ oc rsh -n openstack openstackclient

Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

The control plane is deployed when all the pods are either completed or running.
Verification
Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

Confirm that the internal service endpoints are registered with each service:

  $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
  +--------------+-----------+------------------------------------------------------------------------+
  | Service Name | Interface | URL                                                                    |
  +--------------+-----------+------------------------------------------------------------------------+
  | glance       | internal  | https://glance-internal.openstack.svc                                  |
  | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org |
  +--------------+-----------+------------------------------------------------------------------------+

Exit the OpenStackClient pod:

  $ exit
4.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  messagingBus:
    cluster: rabbitmq
  notificationsBus:
    cluster: rabbitmq
  secret: osp-secret
  storageClass: your-RHOCP-storage-class
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured to activate the service
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0 # backend needs to be configured to activate the service
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          messagingBus:
            cluster: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          messagingBus:
            cluster: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
        replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      secret: osp-secret
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageRequest: 100Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}
      ovnController:
        networkAttachment: tenant
        nicMappings:
          my-network: nic1
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        persistence:
          storage: 10Gi
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        persistence:
          storage: 10Gi
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        dataplaneNetwork: ctlplane
        networkAttachments:
        - ctlplane
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

  Note: If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.
- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- nicMappings: Pairs the physical network your gateway is on with the NIC that connects to the gateway network. This physical network is set in the neutron network provider:* name field. You can, optionally, add more <network_name>:<nic_name> pairs as required.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.

  Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
4.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service.
Procedure
Open the OpenStackControlPlane CR file on your workstation. Locate the service that you want to remove from the control plane and disable it:
  cinder:
    enabled: false
    apiOverride:
      route: {}
    ...

Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

Check that the service is removed:

  $ oc get cinder -n openstack

This command returns the following message when the service is successfully removed:

  No resources found in openstack namespace.

Check that the API endpoints for the service are removed from the Identity service (keystone):

  $ oc rsh -n openstack openstackclient
  $ openstack endpoint list --service volumev3

This command returns the following message when the API endpoints for the service are successfully removed:

  No service with a type, name or ID of 'volumev3' exists.