Chapter 4. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift


You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.

Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field.

For example, the Block Storage service (cinder) has different requirements for each of its services:

  • The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
  • The cinder-api service has high network usage due to resource listing requests.
  • The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
  • The cinder-backup service has high memory, network, and CPU requirements.

Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
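This kind of placement can be expressed with the nodeSelector field. The following fragment is a minimal sketch, not a complete OpenStackControlPlane CR: the label key and value (type: openstack-storage-node) are a hypothetical example of a label you might create with oc label node, and the exact field paths can vary by release:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  # Default placement for all services: any worker node
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  cinder:
    template:
      cinderVolumes:
        volume1:
          # Pin this cinder-volume service to nodes carrying the hypothetical
          # "type=openstack-storage-node" label
          nodeSelector:
            type: openstack-storage-node
```

Services without their own nodeSelector inherit the top-level value, so in this sketch only the cinder-volume pods are restricted to the labeled nodes.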

Tip

Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.

4.2. Creating a storage class

You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. If you do not have an existing storage class that can provide persistent volumes, you can use the Logical Volume Manager Storage Operator to provide a storage class for RHOSO. You specify this storage class as the cluster storage back end for the RHOSO control plane deployment. Use a storage back end based on SSD or NVMe drives for the storage class. For more information about Logical Volume Manager Storage, see Persistent storage using Logical Volume Manager Storage.
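If you use LVM Storage, the device class that backs the storage class is defined in an LVMCluster CR. The following fragment is a minimal sketch, assuming the conventions of the LVM Storage documentation: the device class name local-storage matches the example readiness check that follows, and the CR name, namespace, and sizing values are illustrative:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      # Device class name; LVM Storage derives the storage class name from it
      - name: local-storage
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```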

If you are using LVM Storage, you must wait until the LVM Storage Operator reports that the storage is available before you create the control plane. The Operator signals that the cluster and LVMS storage configuration is complete by adding the volume group capacity annotation to each worker node object. If you create the control plane before the storage is ready, multiple PVCs and pods are scheduled on the same node.

To check that the storage is ready, query the nodes in your lvmclusters.lvm.topolvm.io object. For example, if you have three worker nodes and the device class for the LVM Storage Operator is named local-storage, run the following command:

# oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'

The storage is ready when this command returns three non-zero values.
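The inner pipeline in that command converts the output of oc get nodes -o name into the comma-separated node list that the label selector expects. You can verify that part locally with simulated output for three worker nodes:

```shell
# Simulate "oc get nodes -l node-role.kubernetes.io/worker -o name" output
nodes='node/worker-1
node/worker-2
node/worker-3'

# Strip the "node/" prefix, join the names with commas, and drop the
# trailing comma, exactly as in the readiness-check command
echo "$nodes" | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'
# → worker-1,worker-2,worker-3
```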

4.3. Creating the openstack namespace

You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.

Prerequisites

  • You are logged in to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.

Procedure

  1. Create the openstack project for the deployed RHOSO environment:

    $ oc new-project openstack
  2. Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "openstack",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }

    If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
  3. Optional: To avoid having to specify the namespace in every command that you run in the openstack namespace, set the default project to openstack:

    $ oc project openstack

4.4. Providing secure access to the RHOSO services

You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.

For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.

Warning

You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.

Prerequisites

  • You have installed python3-cryptography.

Procedure

  1. Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
  2. Add the following initial configuration to openstack_service_secret.yaml:

    apiVersion: v1
    data:
      AdminPassword: <base64_password>
      AodhPassword: <base64_password>
      AodhDatabasePassword: <base64_password>
      BarbicanDatabasePassword: <base64_password>
      BarbicanPassword: <base64_password>
      BarbicanSimpleCryptoKEK: <base64_fernet_key>
      CeilometerPassword: <base64_password>
      CinderDatabasePassword: <base64_password>
      CinderPassword: <base64_password>
      DatabasePassword: <base64_password>
      DbRootPassword: <base64_password>
      DesignateDatabasePassword: <base64_password>
      DesignatePassword: <base64_password>
      GlanceDatabasePassword: <base64_password>
      GlancePassword: <base64_password>
      HeatAuthEncryptionKey: <base64_password>
      HeatDatabasePassword: <base64_password>
      HeatPassword: <base64_password>
      IronicDatabasePassword: <base64_password>
      IronicInspectorDatabasePassword: <base64_password>
      IronicInspectorPassword: <base64_password>
      IronicPassword: <base64_password>
      KeystoneDatabasePassword: <base64_password>
      ManilaDatabasePassword: <base64_password>
      ManilaPassword: <base64_password>
      MetadataSecret: <base64_password>
      NeutronDatabasePassword: <base64_password>
      NeutronPassword: <base64_password>
      NovaAPIDatabasePassword: <base64_password>
      NovaAPIMessageBusPassword: <base64_password>
      NovaCell0DatabasePassword: <base64_password>
      NovaCell0MessageBusPassword: <base64_password>
      NovaCell1DatabasePassword: <base64_password>
      NovaCell1MessageBusPassword: <base64_password>
      NovaPassword: <base64_password>
      OctaviaDatabasePassword: <base64_password>
      OctaviaPassword: <base64_password>
      PlacementDatabasePassword: <base64_password>
      PlacementPassword: <base64_password>
      SwiftPassword: <base64_password>
    kind: Secret
    metadata:
      name: osp-secret
      namespace: openstack
    type: Opaque
    • Replace <base64_password> with a 32-character key that is base64 encoded.

      Note

      The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

      You can use the following command to manually generate a base64 encoded password:

      $ echo -n <password> | base64

      Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

      $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    • Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

      $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  3. Create the Secret CR in the cluster:

    $ oc create -f openstack_service_secret.yaml -n openstack
  4. Verify that the Secret CR is created:

    $ oc describe secret osp-secret -n openstack
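The password-generation pipeline used throughout this procedure produces a base64 encoding of a 32-character alphanumeric string. You can confirm this locally on a Linux workstation (the check assumes /dev/urandom and GNU base64 are available):

```shell
# Generate one value the same way the Secret CR examples do
pw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

# Decode it and confirm the underlying password is 32 alphanumeric characters
decoded=$(echo -n "$pw" | base64 -d)
echo "${#decoded}"   # 32
case "$decoded" in (*[!A-Za-z0-9]*) echo "unexpected character";; (*) echo "ok";; esac
```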

4.5. Example Secret CR for secure access to the RHOSO service pods

You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.

If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:

$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignateDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlanceDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  KeystoneDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaAPIDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaAPIMessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell0DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell0MessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell1DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell1MessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF