Chapter 6. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.
6.1. Prerequisites
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
- The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for BGP networks.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default: openstack). Use the following command to check the existing network policies on the cluster:

  $ oc get networkpolicy -n openstack

- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
6.2. Creating the control plane
Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:
- Create the control plane.
- Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see Customizing the Red Hat OpenStack Services on OpenShift deployment.
For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack

Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    secret: osp-secret

Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

  spec:
    secret: osp-secret
    storageClass: <RHOCP_storage_class>

Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
Add the following service configurations:

Note: The following service snippets use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.

- Block Storage service (cinder)

  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0

  replicas: You can deploy the initial control plane without activating the cinderBackup service or the cinderVolumes service. To deploy either service, you must set the number of replicas for the service and configure the back end for the service. For more information, see Configuring the Block Storage backup service and Configuring the Block Storage volume service in Configuring persistent storage.
- Compute service (nova)

  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
          hasAPIAccess: true
      secret: osp-secret

  Note: A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

- DNS service for the data plane

  dns:
    template:
      options:
      - key: server
        values:
        - <IP address for DNS server reachable from dnsmasq pod>
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there is one key-value pair defined because there is only one DNS server configured to forward requests to.

  key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

  - server
  - rev-server
  - srv-host
  - txt-record
  - ptr-record
  - rebind-domain-ok
  - naptr-record
  - cname
  - host-record
  - caa-record
  - dns-rr
  - auth-zone
  - synth-domain
  - no-negcache
  - local

  values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

  Note: This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
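  As a sketch of how multiple key-value pairs combine, the following hypothetical fragment forwards general queries to one resolver and queries for a single internal domain to another. The IP addresses and the example.lab domain are placeholders, not values from this deployment:

  ```yaml
  # Hypothetical sketch: two dnsmasq forwarders defined as separate key-value pairs.
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1                  # placeholder: generic upstream resolver
      - key: server
        values:
        - /example.lab/192.168.122.10    # placeholder: forwarder for one domain only
  ```

  The /domain/IP form follows the dnsmasq server option syntax shown above for /google.com/8.8.8.8.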
- Identity service (keystone)

  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3

- Image service (glance)
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage

  replicas: Set to 0 to configure the back end; set to 3 when deploying the service. You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
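  For orientation, activating the Image service later amounts to raising replicas and supplying a back-end configuration. The fragment below is a sketch only: the customServiceConfig placeholder stands in for the real back-end settings, which come from Configuring persistent storage:

  ```yaml
  # Sketch: activate the Image service after the back end is defined.
  glance:
    template:
      customServiceConfig: |
        # back-end configuration goes here; see Configuring persistent storage
      glanceAPIs:
        default:
          replicas: 3   # was 0; do not raise replicas without a configured back end
  ```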
- Key Management service (barbican)

  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1

- Networking service (neutron)
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi

- Object Storage service (swift)
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      secret: osp-secret
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageRequest: 10Gi

- OVN
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}

- Placement service (placement)
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret

- Telemetry service (ceilometer, prometheus)
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          rabbitMqClusterName: rabbitmq
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false

  autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
Add the following service configurations to implement high availability (HA):

- MariaDB Galera cluster

  A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3

- memcached cluster

  A single memcached cluster that contains three memcached servers:

  memcached:
    templates:
      memcached:
        replicas: 3

- RabbitMQ cluster

  A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer

  Note: Multiple RabbitMQ instances cannot share the same VIP because they use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.
Create the control plane:

$ oc create -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

Sample output:

NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Note: Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

$ oc rsh -n openstack openstackclient

Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

The control plane is deployed when all the pods are either completed or running.
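A quick way to spot pods that are still starting or failing is to filter on pod phase. This is a sketch using standard oc/kubectl field selectors, not a command from the product documentation:

```shell
# List only pods that are not Running and not Succeeded (Completed) in the
# openstack namespace; an empty result means all control plane pods are healthy.
oc get pods -n openstack --field-selector 'status.phase!=Running,status.phase!=Succeeded'
```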
Verification

Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

Confirm that the internal service endpoints are registered with each service:

$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance

Sample output:

+--------------+-----------+-------------------------------------------------------------------------+
| Service Name | Interface | URL                                                                     |
+--------------+-----------+-------------------------------------------------------------------------+
| glance       | internal  | https://glance-internal.openstack.svc                                   |
| glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org  |
+--------------+-----------+-------------------------------------------------------------------------+

Exit the OpenStackClient pod:

$ exit
6.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
spec:
secret: osp-secret
storageClass: your-RHOCP-storage-class
cinder:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
cinderAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
cinderScheduler:
replicas: 1
cinderBackup:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
cinderVolumes:
volume1:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
nova:
apiOverride:
route: {}
template:
apiServiceTemplate:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
metadataServiceTemplate:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
schedulerServiceTemplate:
replicas: 3
cellTemplates:
cell0:
cellDatabaseAccount: nova-cell0
cellDatabaseInstance: openstack
cellMessageBusInstance: rabbitmq
hasAPIAccess: true
cell1:
cellDatabaseAccount: nova-cell1
cellDatabaseInstance: openstack-cell1
cellMessageBusInstance: rabbitmq-cell1
noVNCProxyServiceTemplate:
enabled: true
hasAPIAccess: true
secret: osp-secret
dns:
template:
options:
- key: server
values:
- 192.168.122.1
- key: server
values:
- 192.168.122.2
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: ctlplane
metallb.universe.tf/allow-shared-ip: ctlplane
metallb.universe.tf/loadBalancerIPs: 192.168.122.80
spec:
type: LoadBalancer
replicas: 2
galera:
templates:
openstack:
storageRequest: 5000M
secret: osp-secret
replicas: 3
openstack-cell1:
storageRequest: 5000M
secret: osp-secret
replicas: 3
keystone:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
replicas: 3
glance:
apiOverrides:
default:
route: {}
template:
databaseInstance: openstack
storage:
storageRequest: 10G
secret: osp-secret
keystoneEndpoint: default
glanceAPIs:
default:
replicas: 0 # Configure back end; set to 3 when deploying service
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
networkAttachments:
- storage
barbican:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
barbicanAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
barbicanWorker:
replicas: 3
barbicanKeystoneListener:
replicas: 1
memcached:
templates:
memcached:
replicas: 3
neutron:
apiOverride:
route: {}
template:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
networkAttachments:
- internalapi
swift:
enabled: true
proxyOverride:
route: {}
template:
swiftProxy:
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
replicas: 2
swiftRing:
ringReplicas: 3
swiftStorage:
networkAttachments:
- storage
replicas: 3
storageRequest: 100Gi
ovn:
template:
ovnDBCluster:
ovndbcluster-nb:
replicas: 3
dbType: NB
storageRequest: 10G
networkAttachment: internalapi
ovndbcluster-sb:
replicas: 3
dbType: SB
storageRequest: 10G
networkAttachment: internalapi
ovnNorthd: {}
placement:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
replicas: 3
secret: osp-secret
rabbitmq:
templates:
rabbitmq:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.85
spec:
type: LoadBalancer
rabbitmq-cell1:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.86
spec:
type: LoadBalancer
telemetry:
enabled: true
template:
metricStorage:
enabled: true
dashboardsEnabled: true
monitoringStack:
alertingEnabled: true
scrapeInterval: 30s
storage:
strategy: persistent
retention: 24h
persistent:
pvcStorageRequest: 20G
autoscaling:
enabled: false
aodh:
databaseAccount: aodh
databaseInstance: openstack
passwordSelector:
aodhService: AodhPassword
rabbitMqClusterName: rabbitmq
serviceUser: aodh
secret: osp-secret
heatInstance: heat
ceilometer:
enabled: true
secret: osp-secret
logging:
enabled: false
- storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- cinder: Service-specific parameters for the Block Storage service (cinder).
- cinderBackup: The Block Storage service back end. For more information about configuring storage services, see the Configuring persistent storage guide.
- cinderVolumes: The Block Storage service configuration. For more information about configuring storage services, see the Configuring persistent storage guide.
- networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

  Note: If you do not configure the isolated networks that each service pod is attached to, the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

- nova: Service-specific parameters for the Compute service (nova).
- apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- metallb.universe.tf/address-pool: internalapi: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: 172.17.0.80: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.

  Note: Multiple RabbitMQ instances cannot share the same VIP because they use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.

- metallb.universe.tf/loadBalancerIPs: 172.17.0.85: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
- metallb.universe.tf/loadBalancerIPs: 172.17.0.86: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
6.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution. Removing a service is not the same as stopping service pods, and it is irreversible: disabling a service removes the service database, and resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.
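One way to back up a service database is to dump it from the Galera pod. The following is a sketch only: the pod name openstack-galera-0, the secret key DbRootPassword, and the database name cinder are assumptions about this environment, so verify each of them in your cluster before relying on this procedure:

```shell
# Sketch: back up the Block Storage (cinder) database before removing the service.
# Assumes the Galera pod is openstack-galera-0 and the root password is stored
# under the DbRootPassword key of osp-secret - verify both in your cluster.
PASSWORD=$(oc get secret osp-secret -n openstack \
  -o jsonpath='{.data.DbRootPassword}' | base64 -d)
oc exec -n openstack openstack-galera-0 -- \
  mysqldump -u root -p"$PASSWORD" cinder > cinder-backup.sql
```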
Procedure

Open the OpenStackControlPlane CR file on your workstation. Locate the service that you want to remove from the control plane and disable it:

  cinder:
    enabled: false
    apiOverride:
      route: {}
  ...

Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP removes the resources related to the disabled service. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

Sample output:

NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

Check that the service is removed:

$ oc get cinder -n openstack

This command returns the following message when the service is successfully removed:

No resources found in openstack namespace.

Check that the API endpoints for the service are removed from the Identity service (keystone):

$ oc rsh -n openstack openstackclient
$ openstack endpoint list --service volumev3

This command returns the following message when the API endpoints for the service are successfully removed:

No service with a type, name or ID of 'volumev3' exists.
6.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)