Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Services on OpenShift deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field.
For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
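The following OpenStackControlPlane snippet is a minimal sketch of this approach. The node label type: cinder-volume-node, the CR name, and the volume back-end name volume1 are illustrative assumptions, and the exact service template fields can vary between releases, so check the OpenStackControlPlane CRD for your version before using it:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  # Default placement: any RHOCP worker node can host RHOSO services
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  cinder:
    template:
      cinderVolumes:
        volume1:
          # Pin this cinder-volume instance to nodes that carry the
          # example label type=cinder-volume-node
          nodeSelector:
            type: cinder-volume-node
Label the dedicated nodes before you apply the CR, for example with oc label node <node_name> type=cinder-volume-node.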
2.2. Creating a storage class
You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end, to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Red Hat recommends that you use the Logical Volume Manager (LVM) Storage storage class with RHOSO, although you can use other implementations, such as Container Storage Interface (CSI) or OpenShift Data Foundation (ODF). You specify this storage class as the cluster storage back end for the RHOSO deployment. Red Hat recommends that you use a storage back end based on SSD or NVMe drives for the storage class.
You must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete by adding the volume group capacity annotation to the worker node objects. If you deploy the control plane pods before the storage is available on all of the worker nodes, multiple PVCs and pods are scheduled on the same nodes.
To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage":
# oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'
The storage is ready when this command returns three non-zero values.
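For reference, the following LVMCluster sketch defines a device class named local-storage that matches the readiness query above. The CR name, namespace, and thin-pool values are illustrative assumptions; use the configuration described in the RHOCP documentation referenced below for your environment:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      # Device class name that appears in the capacity annotation
      # queried in the readiness check above
      - name: local-storage
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
With a device class named local-storage, LVM Storage typically creates a storage class named lvms-local-storage, which you can then specify as the cluster storage back end for the RHOSO deployment.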
For more information about how to configure the LVM Storage storage class, see Persistent storage using Logical Volume Manager Storage in the RHOCP Storage guide.
2.3. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
Procedure
Create the openstack project for the deployed RHOSO environment:
$ oc new-project openstack
Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:
$ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
{
  "kubernetes.io/metadata.name": "openstack",
  "pod-security.kubernetes.io/enforce": "privileged",
  "security.openshift.io/scc.podSecurityLabelSync": "false"
}
If the security context constraint (SCC) is not "privileged", use the following commands to change it:
$ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:
$ oc project openstack
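As a declarative alternative to the label commands above, you can define the namespace and its labels in a single manifest. The following sketch is equivalent to the labels shown in the verification output earlier in this procedure; apply it with oc apply -f <file> if you prefer to manage the namespace as a file:
apiVersion: v1
kind: Namespace
metadata:
  name: openstack
  labels:
    # Allow the OpenStack Operators to create privileged pods
    pod-security.kubernetes.io/enforce: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"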
2.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.
Procedure
- Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml.
Add the following initial configuration to openstack_service_secret.yaml:
apiVersion: v1
data:
  AdminPassword: <base64_password>
  AodhPassword: <base64_password>
  AodhDatabasePassword: <base64_password>
  BarbicanDatabasePassword: <base64_password>
  BarbicanPassword: <base64_password>
  BarbicanSimpleCryptoKEK: <base64_fernet_key>
  CeilometerPassword: <base64_password>
  CinderDatabasePassword: <base64_password>
  CinderPassword: <base64_password>
  DatabasePassword: <base64_password>
  DbRootPassword: <base64_password>
  DesignateDatabasePassword: <base64_password>
  DesignatePassword: <base64_password>
  GlanceDatabasePassword: <base64_password>
  GlancePassword: <base64_password>
  HeatAuthEncryptionKey: <base64_password>
  HeatDatabasePassword: <base64_password>
  HeatPassword: <base64_password>
  IronicDatabasePassword: <base64_password>
  IronicInspectorDatabasePassword: <base64_password>
  IronicInspectorPassword: <base64_password>
  IronicPassword: <base64_password>
  KeystoneDatabasePassword: <base64_password>
  ManilaDatabasePassword: <base64_password>
  ManilaPassword: <base64_password>
  MetadataSecret: <base64_password>
  NeutronDatabasePassword: <base64_password>
  NeutronPassword: <base64_password>
  NovaAPIDatabasePassword: <base64_password>
  NovaAPIMessageBusPassword: <base64_password>
  NovaCell0DatabasePassword: <base64_password>
  NovaCell0MessageBusPassword: <base64_password>
  NovaCell1DatabasePassword: <base64_password>
  NovaCell1MessageBusPassword: <base64_password>
  NovaPassword: <base64_password>
  OctaviaDatabasePassword: <base64_password>
  OctaviaPassword: <base64_password>
  PlacementDatabasePassword: <base64_password>
  PlacementPassword: <base64_password>
  SwiftPassword: <base64_password>
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
Replace <base64_password> with a 32-character key that is base64 encoded. You can use the following command to manually generate a base64 encoded password:
$ echo -n <password> | base64
Alternatively, if you are using a Linux workstation and you are generating the Secret CR definition file by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:
$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
Replace <base64_fernet_key> with a fernet key that is base64 encoded. You can use the following command to manually generate the fernet key:
$ python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64
Note: The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains 32 characters long.
Create the Secret CR in the cluster:
$ oc create -f openstack_service_secret.yaml -n openstack
Verify that the Secret CR is created:
$ oc describe secret osp-secret -n openstack
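When you later create the control plane, you reference this secret by name. The following minimal sketch assumes that the OpenStackControlPlane CR accepts the secret name in a top-level spec.secret field, as in current openstack-operator releases; confirm the exact field in the control plane deployment documentation for your release:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  # Name of the Secret CR created in this procedure
  secret: osp-secret
  # ... remaining control plane configuration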