Chapter 5. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.
5.1. Prerequisites
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
- The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default: openstack). Use the following command to check the existing network policies on the cluster:
  $ oc get networkpolicy -n openstack
  This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
5.2. Creating the control plane
Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:
- Create the control plane.
- Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack

Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services, and specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

spec:
  secret: osp-secret
  storageClass: <RHOCP_storage_class>

Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
Add the following service configurations:
Note
- The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
- You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
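The loadBalancerIPs field referred to in the note appears in each service's internal endpoint override. The following is a hedged sketch of that general pattern, not a definitive fragment: the annotation keys follow the metallb.universe.tf convention used elsewhere in this chapter, the allow-shared-ip key reflects the default IP sharing between services, and the IP address is a placeholder.

```yaml
# Sketch only: generic internal-endpoint override inside a service template.
# Replace 172.17.0.80 with an address from the MetalLB IPAddressPool you created.
override:
  service:
    internal:
      metadata:
        annotations:
          metallb.universe.tf/address-pool: internalapi
          metallb.universe.tf/allow-shared-ip: internalapi
          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
      spec:
        type: LoadBalancer
```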
Block Storage service (cinder):
- cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
- cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage.
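The YAML block for this service is not reproduced in this extract. The following is a minimal sketch of a cinder section consistent with the fields just described; the replica counts, network names, volume key, and IP address are placeholder assumptions, not recommended values.

```yaml
cinder:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    secret: osp-secret
    cinderAPI:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    cinderScheduler:
      replicas: 1
    cinderBackup:
      networkAttachments:
      - storage
      replicas: 0   # set to the recommended count once a back end is configured
    cinderVolumes:
      volume1:
        networkAttachments:
        - storage
        replicas: 0   # set to the recommended count once a back end is configured
```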
Compute service (nova):
Note
A full set of Compute services (nova) are deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.
DNS service for the data plane:
- dns.template.options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to.
- dns.template.options.key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:
  - server
  - rev-server
  - srv-host
  - txt-record
  - ptr-record
  - rebind-domain-ok
  - naptr-record
  - cname
  - host-record
  - caa-record
  - dns-rr
  - auth-zone
  - synth-domain
  - no-negcache
  - local
- dns.template.options.values: Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.
Note
This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
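As an illustration of the dns.template.options key-value pairs described above, a hedged sketch with two forwarders; the IP addresses are documentation placeholders and the replica count is an assumption.

```yaml
dns:
  template:
    options:
    - key: server
      values:
      - 192.0.2.1
    - key: server
      values:
      - 192.0.2.2
    replicas: 2
```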
Identity service (keystone):
Image service (glance):
- glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
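A minimal sketch of a glance section consistent with the glanceAPIs.default.replicas description above; the surrounding structure is assumed from the operator's CRD naming and the values are placeholders.

```yaml
glance:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    glanceAPIs:
      default:
        replicas: 0   # set to the recommended count once a back end is configured
```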
Key Management service (barbican):
Networking service (neutron):
Object Storage service (swift):
OVN:
Placement service (placement):
Telemetry service (ceilometer, prometheus):
- telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape data plane node_exporter endpoints.
- telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the data plane nodes.
- telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
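The telemetry fields described above can be sketched as follows. This is an assumption-laden outline, not the product example: the enabled flags and network name are placeholders, and autoscaling is shown disabled but present, as the text requires.

```yaml
telemetry:
  enabled: true
  template:
    metricStorage:
      enabled: true
      dataplaneNetwork: ctlplane   # placeholder; must match a networkAttachment below
      networkAttachments:
      - ctlplane
    autoscaling:
      enabled: false   # field must be present even when autoscaling is disabled
```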
Add the following service configurations to implement high availability (HA):
A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1).
A single memcached cluster that contains three memcached servers:

memcached:
  templates:
    memcached:
      replicas: 3
A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1).
Note
You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
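A hedged sketch of the Galera and RabbitMQ sections described above, showing the two instances of each and distinct loadBalancerIPs for the two RabbitMQ instances, as the note requires; replica counts, storage sizes, and IP addresses are placeholder assumptions.

```yaml
galera:
  templates:
    openstack:
      storageRequest: 5000M   # placeholder size
      replicas: 3
    openstack-cell1:
      storageRequest: 5000M   # placeholder size
      replicas: 3
rabbitmq:
  templates:
    rabbitmq:
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.85   # distinct IP
          spec:
            type: LoadBalancer
    rabbitmq-cell1:
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.86   # distinct IP
          spec:
            type: LoadBalancer
```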
Create the control plane:
$ oc create -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:
$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started
The OpenStackControlPlane resources are created when the status is "Setup complete".
Tip
Append the -w option to the end of the get command to track deployment progress.
Note
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands:
$ oc rsh -n openstack openstackclient
Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:
$ oc get pods -n openstack
The control plane is deployed when all the pods are either completed or running.
Verification
Open a remote shell connection to the OpenStackClient pod:
$ oc rsh -n openstack openstackclient
Confirm that the internal service endpoints are registered with each service.
Exit the OpenStackClient pod:
$ exit
5.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.
Note
If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.
- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.
Note
You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
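The full example CR is not reproduced in this extract. The following heavily abridged skeleton, with all values as placeholders, only illustrates where the fields described above sit in the OpenStackControlPlane structure; it is not a deployable configuration.

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  storageClass: <RHOCP_storage_class>
  cinder:                     # spec.cinder
    apiOverride:
      route: {}
    template:
      cinderBackup:           # spec.cinder.template.cinderBackup
        replicas: 0
      cinderVolumes:          # spec.cinder.template.cinderVolumes
        volume1:
          networkAttachments: # spec.cinder.template.cinderVolumes.networkAttachments
          - storage
          replicas: 0
  nova:                       # spec.nova
    apiOverride:              # spec.nova.apiOverride
      route: {}
  rabbitmq:                   # spec.rabbitmq
    templates:
      rabbitmq:
        replicas: 3
      rabbitmq-cell1:
        replicas: 3
```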
5.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution: removing a service is not the same as stopping service pods, and it is irreversible. Disabling a service removes the service database, and resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.
Procedure
Open the OpenStackControlPlane CR file on your workstation.
Locate the service you want to remove from the control plane and disable it:
cinder:
  enabled: false
  apiOverride:
    route: {}
  ...

Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP removes the resources related to the disabled service. Run the following command to check the status:
$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started
The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".
Tip
Append the -w option to the end of the get command to track deployment progress.
Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:
$ oc get pods -n openstack
Check that the service is removed:
$ oc get cinder -n openstack
This command returns the following message when the service is successfully removed:
No resources found in openstack namespace.
Check that the API endpoints for the service are removed from the Identity service (keystone):
$ oc rsh -n openstack openstackclient
$ openstack endpoint list --service volumev3
This command returns the following message when the API endpoints for the service are successfully removed:
No service with a type, name or ID of 'volumev3' exists.
5.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- About advertising for the IP address pools
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)