Chapter 1. Red Hat OpenStack Services on OpenShift 18.0 adoption overview


Adoption is the process of migrating a Red Hat OpenStack Platform (RHOSP) 17.1 control plane to Red Hat OpenStack Services on OpenShift 18.0, and then completing an in-place upgrade of the data plane. You can retain existing infrastructure investments and modernize your RHOSP deployment on a containerized Red Hat OpenShift Container Platform (RHOCP) foundation. To ensure that you understand the entire adoption process and how to sufficiently prepare your RHOSP environment, review the prerequisites, adoption process, and post-adoption tasks.

Important

Read the whole adoption guide before you start the adoption to ensure that you understand the procedure. Prepare the necessary configuration snippets for each RHOSP service in advance, and test the migration in a representative test environment before you apply it to production.

1.1. Adoption limitations

Before you proceed with the adoption, check which features are Technology Previews or unsupported.

Technology Preview

The following features are Technology Previews and have not been tested within the context of the Red Hat OpenStack Services on OpenShift (RHOSO) adoption:

  • Key Manager service (barbican) adoption with Proteccio hardware security module (HSM) integration
  • DNS-as-a-service (designate)

    The following Compute service (nova) features are Technology Previews:

  • NUMA-aware vswitches
  • PCI passthrough by flavor
  • SR-IOV trusted virtual functions
  • vGPU
  • Emulated virtual Trusted Platform Module (vTPM)
  • UEFI
  • AMD SEV
  • Direct download from Rados Block Device (RBD)
  • File-backed memory
  • Defining a custom inventory of resources in a YAML file, provider.yaml
Unsupported features

The adoption process does not support the following features:

  • Adopting Border Gateway Protocol (BGP) environments to the RHOSO data plane
  • Adopting a Federal Information Processing Standards (FIPS) environment

1.2. Adoption prerequisites

Before you begin the adoption procedure, complete the following prerequisites:

Planning information
Back-up information
Compute
ML2/OVS
  • If you use the Modular Layer 2 plug-in with Open vSwitch mechanism driver (ML2/OVS), migrate it to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver. For more information, see Migrating to the OVN mechanism driver.
Tools
  • The oc and podman command line tools are installed on your workstation.
  • Make sure to set the correct RHOSO project namespace in which to run commands.

    $ oc project openstack
RHOSP 17.1 release
RHOSP 17.1 hosts
  • All control plane and data plane hosts of the RHOSP 17.1 cloud are up and running, and continue to run throughout the adoption procedure.

1.3. Guidelines for planning the adoption

When planning to adopt a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 environment, consider the scope of the change. An adoption is similar in scope to a data center upgrade. Different firmware levels, hardware vendors, hardware profiles, networking interfaces, storage interfaces, and so on affect the adoption process and can cause changes in behavior during the adoption.

Review the following guidelines to adequately plan for the adoption and increase the chance that you complete the adoption successfully:

Important

All commands in the adoption documentation are examples. Do not copy and paste the commands without understanding what the commands do.

  • To minimize the risk of an adoption failure, reduce the number of environmental differences between the staging environment and the production sites.
  • If the staging environment is not representative of the production sites or if a staging environment is not available, you must plan to include contingency time in case the adoption fails.
  • Review your custom Red Hat OpenStack Platform (RHOSP) service configuration at every major release.

    • Each major release upgrade moves through multiple OpenStack releases.
    • Each major release might deprecate configuration options or change the format of the configuration.
  • Prepare a Method of Procedure (MOP) that is specific to your environment to reduce the risk of variance or omitted steps when running the adoption process.
  • You can use representative hardware in a staging environment to prepare a MOP and validate any content changes.

    • Include a cross-section of firmware versions, additional interface or device hardware, and any additional software in the representative staging environment to ensure that it is broadly representative of the variety that is present in the production environments.
    • Ensure that you validate any Red Hat Enterprise Linux update or upgrade in the representative staging environment.
  • Use Satellite for localized and version-pinned RPM content where your data plane nodes are located.
  • In the production environment, use the content that you tested in the staging environment.

1.4. Adoption process overview

Familiarize yourself with the steps of the adoption process.

Main adoption process
Distributed Compute Node (DCN) architecture process
Post-adoption tasks
  • For more details on the tasks you must perform after completing the adoption, see Post-adoption tasks.

1.5. Adoption duration and impact

The durations in the following table were recorded in a test environment that consisted of 228 Compute nodes and 3 Networker nodes. To accurately estimate the adoption duration for each task, perform these procedures in a test environment with hardware that is similar to your production environment. Ensure that you set up the Red Hat OpenShift Container Platform (RHOCP) environment and install the Operators before testing.

Important

Durations can vary significantly based on the content of your environment, for example, the size of your service databases or the number of services. The durations represent raw execution time. They do not include human operator activity.

Table 1.1. Duration and impact of adoption stages

TLS-e migration
  • Duration: 3 seconds
  • No impact on running workloads.

Database migration and back-end services deployment
  • Duration: 12 minutes
  • No data loss or workload disruption.
  • APIs are offline.

Control plane adoption
  • Duration: 21 minutes
  • No data loss or workload disruption.
  • APIs are being started.
  • No Compute hosts are available to schedule workloads until the data plane adoption stage.
  • No network changes should be done until the data plane adoption is complete.

Data plane adoption
  • Duration: 25 minutes for the pilot node set, plus 60 minutes for the remaining node sets
  • APIs are online.
  • As Compute hosts are adopted, they become available for workload scheduling.
  • No data loss or workload disruption.

Fast-forward upgrade of Compute services
  • Duration: 8 minutes
  • No data loss or workload disruption.

Networker node adoption
  • Duration: 9 minutes
  • See Table 1.2, "Data plane connectivity impact", for more details about Networker scenarios.
Table 1.2. Data plane connectivity impact

Migrate a 17.1 OVN gateway on the control plane to a RHOCP-hosted OVN gateway
  • Possible L3 downtime due to the migration of the traffic path to new hosts.

Migrate a 17.1 OVN gateway on the control plane to an 18.0 data plane Networker node
  • No L2/L3 data plane connectivity loss because the traffic path remains unchanged.

Migrate a 17.1 OVN gateway on a Networker node to an 18.0 data plane Networker node
  • No L2/L3 data plane connectivity loss because the traffic path remains unchanged.

L3 handled through provider networks
  • No L2/L3 data plane connectivity loss because the traffic path remains unchanged.

1.6. Overview of Distributed Compute Node adoption

The process to adopt a Distributed Compute Node (DCN) deployment from Red Hat OpenStack Platform (RHOSP) to Red Hat OpenStack Services on OpenShift (RHOSO) requires additional adoption tasks:

  • You must map a multi-stack deployment to multiple node sets.
  • You must map additional networking configurations.

    Multi-stack to multi-node set mapping

    In director deployments, DCN environments use multiple Heat stacks:

    • The central stack provides the templates for the Controller nodes and the central Compute nodes.
    • An edge stack provides the templates for the edge Compute nodes at a single site. There is one edge stack per DCN site.

      When you perform an adoption, map director stacks to OpenStackDataPlaneNodeSet custom resources (CRs):

      Table 1.3. Mapping director stacks to RHOSO node sets

      Director stack                  RHOSO node set                                Availability zone
      Central stack (Compute role)    openstack-edpm or openstack-cell1             az-central
      DCN1 stack (ComputeDcn1 role)   openstack-edpm-dcn1 or openstack-cell1-dcn1   az-dcn1
      DCN2 stack (ComputeDcn2 role)   openstack-edpm-dcn2 or openstack-cell1-dcn2   az-dcn2

      Note

      Keep all node sets in the same Nova cell to maintain unified scheduling through a shared cell. The default cell is cell1.
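
      After the data plane adoption completes, availability zone membership for Compute nodes is typically realized through host aggregates. The following commands are an illustrative sketch only; the aggregate name dcn1 and the host name dcn1-compute-0.example.com are assumptions, not values from this guide:

      $ openstack aggregate create --zone az-dcn1 dcn1
      $ openstack aggregate add host dcn1 dcn1-compute-0.example.com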

    Key differences from standard adoption

    The following table summarizes the differences between standard adoption and DCN adoption:

    Table 1.4. Comparison of standard and DCN adoption

    Aspect                 Standard adoption                   DCN adoption
    Director stacks        Single stack                        Multiple stacks (central + edge sites)
    Network topology       Flat L2 networks                    Routed L3 networks with multiple subnets
    Data plane node sets   Single node set                     Multiple node sets (one per site minimum)
    Network routes         Usually not required                Required for inter-site connectivity
    Physnets               Single physnet (e.g., datacentre)   Multiple physnets (e.g., leaf0, leaf1, leaf2)
    Availability zones     Often single AZ                     Multiple AZs (one per site)
    OVN bridge mappings    Single mapping                      Site-specific mappings
    Provider networks      Single segment                      Multi-segment routed provider networks

    Requirements for DCN adoption

    Before adopting a DCN deployment, ensure you have:

    • Network topology information for all sites (IP ranges, VLANs, gateways)
    • Inter-site routing configuration (routes between site subnets)
    • Mapping of director roles to availability zones
    • OVN bridge mapping configuration for each site
Important

The adoption of the control plane must be complete before you adopt any data plane nodes. However, after the control plane is adopted, the edge site data plane adoptions can proceed in parallel with the central site data plane adoption.

DCN Adoption workflow overview

The adoption of a Distributed Compute Node (DCN) deployment from Red Hat OpenStack Platform (RHOSP) to Red Hat OpenStack Services on OpenShift (RHOSO) consists of the following stages:

  1. Control plane adoption: Adopt all control plane services from the central director stack to the RHOSO control plane. This is identical to standard adoption.
  2. Network configuration: Configure multi-subnet NetConfig and NetworkAttachmentDefinition CRs to support all site networks.
  3. Data plane node set creation: Create separate OpenStackDataPlaneNodeSet CRs for each site, each with site-specific network configurations:

    • Network subnet references
    • OVN bridge mappings (physnets)
    • Inter-site routing configuration
  4. Data plane deployment: Deploy all node sets. The edge site node sets can be deployed in parallel after the central site control plane is adopted.
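
The following OpenStackDataPlaneDeployment fragment illustrates step 4, deploying multiple node sets together. It is a minimal sketch that assumes the node set names from Table 1.3; verify the exact specification against the OpenStack Operator documentation for your release:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: dcn-data-plane-adoption
  namespace: openstack
spec:
  # Node sets listed here are deployed by this CR. The edge node sets
  # can instead be referenced by separate OpenStackDataPlaneDeployment
  # CRs to run the site deployments in parallel.
  nodeSets:
  - openstack-edpm
  - openstack-edpm-dcn1
  - openstack-edpm-dcn2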

1.7. Verifying the systemd-container package on Compute hosts

Before you adopt the Red Hat OpenStack Services on OpenShift (RHOSO) data plane, you must verify that the systemd-container package is installed and that systemd-machined is running on all the Compute hosts. You must install the systemd-container package on each Compute host that does not have this package.

Procedure

  1. Log in to the Compute node host as a user with the appropriate permissions.
  2. List the instances that are running on the host:

    $ sudo machinectl list
    Sample output
    MACHINE                  CLASS SERVICE      OS VERSION ADDRESSES
    qemu-1-instance-000000b9 vm    libvirt-qemu -  -       -
    qemu-2-instance-000000c2 vm    libvirt-qemu -  -       -
    
    2 machines listed.
  3. Verify that the systemd-machined service is running:

    $ sudo systemctl status systemd-machined.service
    Sample output
    systemd-machined.service - Virtual Machine and Container Registration Service
         Loaded: loaded (/usr/lib/systemd/system/systemd-machined.service; static)
         Active: active (running) since Mon 2025-06-16 11:42:07 EDT; 2min 48s ago
           Docs: man:systemd-machined.service(8)
                 man:org.freedesktop.machine1(5)
       Main PID: 136614 (systemd-machine)
         Status: "Processing requests..."
          Tasks: 1 (limit: 838860)
         Memory: 1.4M
            CPU: 33ms
         CGroup: /system.slice/systemd-machined.service
                 └─136614 /usr/lib/systemd/systemd-machined
    
    Jun 16 11:42:07 computehost001 systemd[1]: Starting Virtual Machine and Container Registration Service...
    Jun 16 11:42:07 computehost001 systemd[1]: Started Virtual Machine and Container Registration Service.
    Jun 16 11:43:44 computehost001 systemd-machined[136614]: New machine qemu-1-instance-000000b9.
    Jun 16 11:43:51 computehost001 systemd-machined[136614]: New machine qemu-2-instance-000000c2.
    Important

    If the systemd-machined service is running, skip the rest of this procedure. Ensure that you verify that the systemd-machined service is running on each Compute node host in the cluster.

  4. If the systemd-machined service is not running, before you can install the systemd-container package, live migrate all virtual machines from the host. For more information about live migration, see Rebooting Compute nodes in Performing a minor update of Red Hat OpenStack Platform.
  5. Install the systemd-container package on the host:

    • If you upgraded your environment from an earlier version of Red Hat OpenStack Platform, reboot the Compute host to automatically install the systemd-container package.
    • If you deployed a new RHOSO environment, install the systemd-container manually by using the following command. Rebooting the Compute host is not required:

      $ sudo dnf -y install systemd-container
      Note

      If your Compute host is not running a virtual machine, you can install the systemd-container package either automatically or manually.

  6. Repeat this procedure on each Compute host in the cluster where the systemd-machined service is not running.

1.8. Identity service authentication

If you have custom policies enabled, complete the following steps for adoption:

  1. Remove custom policies.
  2. Run the adoption.
  3. Re-add custom policies by using the new SRBAC syntax.
Important

Red Hat does not support customized roles or policies. Syntax errors or misapplied authorization can negatively impact security or usability. If you need customized roles or policies in your production environment, contact Red Hat support for a support exception before you begin the adoption.

After you adopt a director-based OpenStack deployment to a Red Hat OpenStack Services on OpenShift deployment, the Identity service performs user authentication and authorization by using Secure RBAC (SRBAC). If SRBAC is already enabled, then there is no change to how you perform operations. If SRBAC is disabled, then adopting a director-based OpenStack deployment might change how you perform operations due to changes in API access policies.
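
The following snippet illustrates the kind of syntax change that step 3 of the preceding list refers to. It is an illustrative sketch only: the policy target and the role and scope checks are generic oslo.policy examples, not values taken from this guide, and customized policies still require a support exception.

# policy.yaml: legacy override syntax
"identity:list_projects": "rule:admin_required"

# policy.yaml: the same override expressed with SRBAC-style checks
"identity:list_projects": "role:admin and system_scope:all"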

1.9. Configuring the network for the RHOSO deployment

When you adopt a new Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must align the network configuration with the adopted cluster to maintain connectivity for existing workloads.

Perform the following tasks to incorporate the existing network configuration:

  • Configure Red Hat OpenShift Container Platform (RHOCP) worker nodes to align VLAN tags and IP Address Management (IPAM) configuration with the existing deployment.
  • Configure control plane services to use compatible IP ranges for service and load-balancing IP addresses.
  • Configure data plane nodes to use corresponding compatible configuration for VLAN tags and IPAM.

When configuring nodes and services, the general approach is as follows:

  • For IPAM, you can either reuse subnet ranges from the existing deployment or, if there is a shortage of free IP addresses in existing subnets, define new ranges for the new control plane services. If you define new ranges, you configure IP routing between the old and new ranges.
  • For VLAN tags, always reuse the configuration from the existing deployment.

1.9.1. Retrieving the network configuration from your existing deployment

You must determine which isolated networks are defined in your existing deployment. After you retrieve your network configuration, you have the following information:

  • A list of isolated networks that are used in the existing deployment.
  • For each of the isolated networks, the VLAN tag and IP ranges used for dynamic address allocation.
  • A list of existing IP address allocations that are used in the environment. When reusing the existing subnet ranges to host the new control plane services, these addresses are excluded from the corresponding allocation pools.

Procedure

  1. Find the network configuration in the network_data.yaml file. For example:

    - name: InternalApi
      mtu: 1500
      vip: true
      vlan: 20
      name_lower: internal_api
      dns_domain: internal.mydomain.tld.
      service_net_map_replace: internal
      subnets:
        internal_api_subnet:
          ip_subnet: '172.17.0.0/24'
          allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
  2. Retrieve the VLAN tag that is used in the vlan key and the IP range in the ip_subnet key for each isolated network from the network_data.yaml file. When reusing subnet ranges from the existing deployment for the new control plane services, the ranges are split into separate pools for control plane services and load-balancer IP addresses.
  3. Use the tripleo-ansible-inventory.yaml file to determine the list of IP addresses that are already consumed in the adopted environment. For each listed host in the file, make a note of the IP and VIP addresses that are consumed by the node. For example:

    Standalone:
      hosts:
        standalone:
          ...
          internal_api_ip: 172.17.0.100
        ...
      ...
    standalone:
      children:
        Standalone: {}
      vars:
        ...
        internal_api_vip: 172.17.0.2
        ...
    Note

    In this example, the 172.17.0.2 and 172.17.0.100 values are consumed and are not available for the new control plane services until the adoption is complete.

  4. Repeat this procedure for each isolated network and each host in the configuration.

1.9.2. Planning your IPAM configuration

In a Red Hat OpenStack Services on OpenShift (RHOSO) deployment, each service that is deployed on the Red Hat OpenShift Container Platform (RHOCP) worker nodes requires an IP address from the IP Address Management (IPAM) pool. In a Red Hat OpenStack Platform (RHOSP) deployment, all services that are hosted on a Controller node share the same IP address.

The RHOSO control plane has different requirements for the number of IP addresses that are made available for services. Depending on the size of the IP ranges that are used in the existing RHOSP deployment, you might reuse these ranges for the RHOSO control plane.

The total number of IP addresses that are required for the new control plane services in each isolated network is calculated as the sum of the following:

  • The number of RHOCP worker nodes. Each worker node requires 1 IP address in the NodeNetworkConfigurationPolicy custom resource (CR).
  • The number of IP addresses required for the data plane nodes. Each node requires an IP address from the NetConfig CRs.
  • The number of IP addresses required for control plane services. Each service requires an IP address from the NetworkAttachmentDefinition CRs. This number depends on the number of replicas for each service.
  • The number of IP addresses required for load balancer IP addresses. Each service requires a Virtual IP address from the IPAddressPool CRs.

For example, a simple single worker node RHOCP deployment with Red Hat OpenShift Local has the following IP ranges defined for the internalapi network:

  • 1 IP address for the single worker node
  • 1 IP address for the data plane node
  • NetworkAttachmentDefinition CRs for control plane services: X.X.X.30-X.X.X.70 (41 addresses)
  • IPAddressPool CRs for load balancer IPs: X.X.X.80-X.X.X.90 (11 addresses)

This example shows a total of 54 IP addresses allocated to the internalapi allocation pools.

The requirements might differ depending on the list of RHOSP services to be deployed, their replica numbers, and the number of RHOCP worker nodes and data plane nodes.

Additional IP addresses might be required in future RHOSP releases, so you must plan for some extra capacity for each of the allocation pools that are used in the new environment.

After you determine the required IP pool size for the new deployment, you can choose to define new IP address ranges or reuse your existing IP address ranges. Regardless of the scenario, the VLAN tags in the existing deployment are reused in the new deployment. Ensure that the VLAN tags are properly retained in the new configuration.

1.9.2.1. Configuring new subnet ranges

Note

If you are using IPv6, you can reuse existing subnet ranges in most cases. For more information about existing subnet ranges, see Reusing existing subnet ranges.

You can define new IP ranges for control plane services that belong to a different subnet that is not used in the existing cluster. Then you configure link local IP routing between the existing and new subnets to enable existing and new service deployments to communicate. This involves using the director mechanism on a pre-adopted cluster to configure additional link local routes. This enables the data plane deployment to reach out to Red Hat OpenStack Platform (RHOSP) nodes by using the existing subnet addresses. You can use new subnet ranges with any existing subnet configuration, and when the existing cluster subnet ranges do not have enough free IP addresses for the new control plane services.

You must size the new subnet appropriately to accommodate the new control plane services. There are no specific requirements for the existing deployment allocation pools that are already consumed by the RHOSP environment.

Important

Defining a new subnet for Storage and Storage management is not supported because Compute service (nova) and Red Hat Ceph Storage do not allow modifying those networks during adoption.

In the following procedure, you configure NetworkAttachmentDefinition custom resources (CRs) to use a different subnet from what is configured in the network_config section of the OpenStackDataPlaneNodeSet CR for the same networks. The new range in the NetworkAttachmentDefinition CR is used for control plane services, while the existing range in the OpenStackDataPlaneNodeSet CR is used to manage IP Address Management (IPAM) for data plane nodes.

The values that are used in the following procedure are examples. Use values that are specific to your configuration.

Procedure

  1. Configure link local routes on the existing deployment nodes for the control plane subnets. This is done through director configuration:

    network_config:
      - type: ovs_bridge
        name: br-ctlplane
        routes:
        - ip_netmask: 0.0.0.0/0
          next_hop: 192.168.1.1
        - ip_netmask: 172.31.0.0/24
          next_hop: 192.168.1.100
    • ip_netmask defines the new control plane subnet.
    • next_hop defines the control plane IP address of the existing data plane node.

      Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment.

  2. Apply the new configuration to every RHOSP node:

    (undercloud)$ openstack overcloud network provision \
      --output <deployment_file> \
      [--templates <templates_directory>] \
      /home/stack/templates/<networks_definition_file>

    (undercloud)$ openstack overcloud node provision \
      --stack <stack> \
      --network-config \
      --output <deployment_file> \
      [--templates <templates_directory>] \
      /home/stack/templates/<node_definition_file>
    • Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
    • Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
    • Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. The cli-overcloud-node-network-config.yaml playbook uses the os-net-config tool to apply the network configuration on the deployed nodes. If you do not use --network-config to provide the network definitions, then you must configure the {{role.name}}NetworkConfigTemplate parameters in your network-environment.yaml file, otherwise the default network definitions are used.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
    • Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml. Ensure that the network_config_update variable is set to true in the node definition file.

      Note

      Network configuration changes are not applied by default to avoid the risk of network disruption. You must enforce the changes by setting StandaloneNetworkConfigUpdate: true in the director configuration files.

  3. Confirm that there are new link local routes to the new subnet on each node. For example:

    # ip route | grep 172
    172.31.0.0/24 via 192.168.122.100 dev br-ctlplane
  4. You must also configure link local routes to the existing deployment subnets on the Red Hat OpenStack Services on OpenShift (RHOSO) worker nodes. To do this, add routes entries to the NodeNetworkConfigurationPolicy CRs for each network. For example:

      - destination: 192.168.122.0/24
        next-hop-interface: ospbr
    • destination defines the original subnet of the isolated network on the data plane.
    • next-hop-interface defines the Red Hat OpenShift Container Platform (RHOCP) worker network interface that corresponds to the isolated network on the data plane.

      As a result, the following route is added to your RHOCP nodes:

      # ip route | grep 192
      192.168.122.0/24 dev ospbr proto static scope link
  5. Later, during the data plane adoption, in the network_config section of the OpenStackDataPlaneNodeSet CR, add the same link local routes for the new control plane subnet ranges. For example:

      nodeTemplate:
        ansible:
          ansibleUser: root
          ansibleVars:
            additional_ctlplane_host_routes:
            - ip_netmask: 172.31.0.0/24
              next_hop: '{{ ctlplane_ip }}'
            edpm_network_config_template: |
              network_config:
              - type: ovs_bridge
                routes: {{ ctlplane_host_routes + additional_ctlplane_host_routes }}
                ...
  6. List the IP addresses that are used for the data plane nodes in the existing deployment as ansibleHost and fixedIP. For example:

      nodes:
        standalone:
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: ""
          hostName: standalone
          networks:
          - defaultRoute: true
            fixedIP: 192.168.122.100
            name: ctlplane
            subnetName: subnet1
    Important

    Do not change RHOSP node IP addresses during the adoption process. List previously used IP addresses in the fixedIP fields for each node entry in the nodes section of the OpenStackDataPlaneNodeSet CR.

  7. Expand the SSH range for the firewall configuration to include both subnets to allow SSH access to data plane nodes from both subnets:

      edpm_sshd_allowed_ranges:
      - 192.168.122.0/24
      - 172.31.0.0/24

    This provides SSH access to the RHOSP nodes from both the new subnet and the existing RHOSP subnets.

1.9.2.2. Reusing existing subnet ranges

You can reuse existing subnet ranges if they have enough IP addresses to allocate to the new control plane services. You configure the new control plane services to use the same subnet as you used in the Red Hat OpenStack Platform (RHOSP) environment, and configure the allocation pools that are used by the new services to exclude IP addresses that are already allocated to existing cluster nodes. By reusing existing subnets, you avoid additional link local route configuration between the existing and new subnets.

If your existing subnets do not have enough IP addresses in the existing subnet ranges for the new control plane services, you must create new subnet ranges.

No special routing configuration is required to reuse subnet ranges. However, you must ensure that the IP addresses that are consumed by RHOSP services do not overlap with the new allocation pools configured for Red Hat OpenStack Services on OpenShift control plane services.

If you are especially constrained by the size of the existing subnet, you may have to apply elaborate exclusion rules when defining allocation pools for the new control plane services.

1.9.3. Configuring isolated networks

Before you begin replicating your existing VLAN and IPAM configuration in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must have the following IP address allocations for the new control plane services:

  • 1 IP address for each isolated network on each Red Hat OpenShift Container Platform (RHOCP) worker node. You configure these IP addresses in the NodeNetworkConfigurationPolicy custom resources (CRs) for the RHOCP worker nodes.
  • 1 IP range for each isolated network for the data plane nodes. You configure these ranges in the NetConfig CRs for the data plane nodes.
  • 1 IP range for each isolated network for control plane services. These ranges enable pod connectivity for isolated networks in the NetworkAttachmentDefinition CRs.
  • 1 IP range for each isolated network for load balancer IP addresses. These IP ranges define load balancer IP addresses for MetalLB in the IPAddressPool CRs.
Note

The exact list and configuration of isolated networks in the following procedures should reflect the actual Red Hat OpenStack Platform environment. The number of isolated networks might differ from the examples used in the procedures. The IPAM scheme might also differ. Only the parts of the configuration that are relevant to configuring networks are shown. The values that are used in the following procedures are examples. Use values that are specific to your configuration.

1.9.3.1. Configuring isolated networks on RHOCP worker nodes

To connect service pods to isolated networks on Red Hat OpenShift Container Platform (RHOCP) worker nodes that run Red Hat OpenStack Platform services, physical network configuration on the hypervisor is required.

This configuration is managed by the NMState operator, which uses NodeNetworkConfigurationPolicy custom resources (CRs) to define the desired network configuration for the nodes.

Procedure

  • For each RHOCP worker node, define a NodeNetworkConfigurationPolicy CR that describes the desired network configuration. For example:

    apiVersion: v1
    items:
    - apiVersion: nmstate.io/v1
      kind: NodeNetworkConfigurationPolicy
      spec:
        desiredState:
          interfaces:
          - description: internalapi vlan interface
            ipv4:
              address:
              - ip: 172.17.0.10
                prefix-length: 24
              dhcp: false
              enabled: true
            ipv6:
              enabled: false
            name: enp6s0.20
            state: up
            type: vlan
            vlan:
              base-iface: enp6s0
              id: 20
              reorder-headers: true
          - description: storage vlan interface
            ipv4:
              address:
              - ip: 172.18.0.10
                prefix-length: 24
              dhcp: false
              enabled: true
            ipv6:
              enabled: false
            name: enp6s0.21
            state: up
            type: vlan
            vlan:
              base-iface: enp6s0
              id: 21
              reorder-headers: true
          - description: tenant vlan interface
            ipv4:
              address:
              - ip: 172.19.0.10
                prefix-length: 24
              dhcp: false
              enabled: true
            ipv6:
              enabled: false
            name: enp6s0.22
            state: up
            type: vlan
            vlan:
              base-iface: enp6s0
              id: 22
              reorder-headers: true
        nodeSelector:
          kubernetes.io/hostname: ocp-worker-0
          node-role.kubernetes.io/worker: ""
    Note

    For environments that are enabled with border gateway protocol (BGP), you might need to add additional routes in the NodeNetworkConfigurationPolicy CR so that RHOCP worker nodes can reach the Red Hat OpenStack Platform Controller nodes and Compute nodes over the control plane and internal API networks.

    When you configure the RHOCP worker nodes network in the NodeNetworkConfigurationPolicy CR, add routes for each of the following networks:

    • External network (for example, 172.31.0.0/24)
    • Control plane network (for example, 192.168.188.0/24)
    • BGP main network (for example, 99.99.0.0/16)

    The following example shows the routes.config section from a NodeNetworkConfigurationPolicy CR for a worker node with BGP configured. In this example, 100.64.0.17 and 100.65.0.17 are the IP addresses of the leaf switches that are connected to the specific RHOCP node:

        routes:
          config:
          - destination: 99.99.0.0/16
            next-hop-address: 100.64.0.17
            next-hop-interface: enp7s0
            weight: 200
          - destination: 99.99.0.0/16
            next-hop-address: 100.65.0.17
            next-hop-interface: enp8s0
            weight: 200
          - destination: 172.31.0.0/24
            next-hop-address: 100.64.0.17
            next-hop-interface: enp7s0
            weight: 200
          - destination: 172.31.0.0/24
            next-hop-address: 100.65.0.17
            next-hop-interface: enp8s0
            weight: 200
          - destination: 192.168.188.0/24
            next-hop-address: 100.64.0.17
            next-hop-interface: enp7s0
            weight: 200
          - destination: 192.168.188.0/24
            next-hop-address: 100.65.0.17
            next-hop-interface: enp8s0
            weight: 200

1.9.3.2. Configuring isolated networks on control plane services

After the NMState operator creates the desired hypervisor network configuration for isolated networks, you must configure the Red Hat OpenStack Platform (RHOSP) services to use the configured interfaces. You define a NetworkAttachmentDefinition custom resource (CR) for each isolated network. In some clusters, these CRs are managed by the Cluster Network Operator, in which case you use Network CRs instead. For more information, see Cluster Network Operator in Networking.

Procedure

  1. Define a NetworkAttachmentDefinition CR for each isolated network. For example:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "enp6s0.20",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.20",
            "range_end": "172.17.0.50"
          }
        }
    Important

    Ensure that the interface name and IPAM range match the configuration that you used in the NodeNetworkConfigurationPolicy CRs.

  2. Optional: When reusing existing IP ranges, you can exclude part of the range that is used in the existing deployment by using the exclude parameter in the NetworkAttachmentDefinition pool. For example:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "enp6s0.20",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.20",
            "range_end": "172.17.0.50",
            "exclude": [
              "172.17.0.24/32",
              "172.17.0.44/31"
            ]
          }
        }
    • spec.config.ipam.range_start defines the start of the IP range.
    • spec.config.ipam.range_end defines the end of the IP range.
    • spec.config.ipam.exclude excludes part of the IP range. This example excludes IP addresses 172.17.0.24/32 and 172.17.0.44/31 from the allocation pool.
  3. If your RHOSP services require load balancer IP addresses, define the pools for these services in an IPAddressPool CR. For example:

    Note

    The load balancer IP addresses belong to the same IP range as the control plane services, and are managed by MetalLB. This pool should also be aligned with the RHOSP configuration.

    - apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      spec:
        addresses:
        - 172.17.0.60-172.17.0.70

    Define IPAddressPool CRs for each isolated network that requires load balancer IP addresses.

  4. Optional: When reusing existing IP ranges, you can exclude part of the range by listing multiple entries in the addresses section of the IPAddressPool. For example:

    - apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      spec:
        addresses:
        - 172.17.0.60-172.17.0.64
        - 172.17.0.66-172.17.0.70

    This example excludes the 172.17.0.65 address from the allocation pool.

  5. For environments that are enabled with border gateway protocol (BGP), add routes to the NetworkAttachmentDefinition CRs so that the pods can communicate with the Red Hat OpenStack Platform Controller nodes and Compute nodes over the isolated networks. This is similar to the routes that should be added to the NodeNetworkConfigurationPolicy CRs in BGP environments. For more information about isolated networks, see Configuring isolated networks on RHOCP worker nodes. The following example shows a NetworkAttachmentDefinition CR for the storage network with routes:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: storage
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "storage",
          "type": "bridge",
          "isDefaultGateway": false,
          "isGateway": true,
          "forceAddress": false,
          "hairpinMode": true,
          "ipMasq": false,
          "bridge": "storage",
          "ipam": {
            "type": "whereabouts",
            "range": "172.18.0.0/24",
            "range_start": "172.18.0.30",
            "range_end": "172.18.0.70",
            "routes": [
               {"dst": "172.31.0.0/24", "gw": "172.18.0.1"},
               {"dst": "192.168.188.0/24", "gw": "172.18.0.1"},
               {"dst": "99.99.0.0/16", "gw": "172.18.0.1"}
            ]
          }
        }

1.9.3.3. Configuring isolated networks on data plane nodes

Data plane nodes are configured by the OpenStack Operator and your OpenStackDataPlaneNodeSet custom resources (CRs). The OpenStackDataPlaneNodeSet CRs define your desired network configuration for the nodes.

Your Red Hat OpenStack Services on OpenShift (RHOSO) network configuration should reflect the existing Red Hat OpenStack Platform (RHOSP) network setup. You must pull the network_data.yaml files from each RHOSP node and reuse them when you define the OpenStackDataPlaneNodeSet CRs. The format of the configuration does not change, so you can put network templates under edpm_network_config_template variables, either for all nodes or for each node.

Procedure

  1. Configure a NetConfig CR with your desired VLAN tags and IPAM configuration. For example:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: netconfig
    spec:
      networks:
      - name: internalapi
        dnsDomain: internalapi.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          cidr: 172.17.0.0/24
          vlan: 20
      - name: storage
        dnsDomain: storage.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: tenant
        dnsDomain: tenant.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22

    where:

    spec.networks
    Specifies the networks composition. The networks composition must match the source cloud configuration to avoid data plane connectivity downtime.
  2. Optional: In the NetConfig CR, list multiple ranges for the allocationRanges field to exclude some of the IP addresses, for example, to accommodate IP addresses that are already consumed by the adopted environment:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: netconfig
    spec:
      networks:
      - name: internalapi
        dnsDomain: internalapi.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.199
            start: 172.17.0.100
          - end: 172.17.0.250
            start: 172.17.0.201
          cidr: 172.17.0.0/24
          vlan: 20

    This example excludes the 172.17.0.200 address from the pool.

1.10. Planning networking for spine-leaf and DCN deployments

When you adopt a Red Hat OpenStack Platform (RHOSP) deployment with spine-leaf networking, such as a Distributed Compute Node (DCN) architecture, you must configure each L2 network segment with a separate IP subnet and create routed provider networks. Traffic between sites is routed at L3 through spine routers or similar network infrastructure.

You must configure routing for Compute nodes at edge sites to connect with control plane services, such as RabbitMQ or the database at the central site. The cloud does not function correctly unless these routes are configured.

Note

DHCP relay is not supported in adopted Red Hat OpenStack Services on OpenShift (RHOSO) environments with spine-leaf topologies. This affects bare-metal provisioning scenarios that use PXE boot.

If you need to provision bare-metal nodes at edge sites, use Redfish virtual media or similar BMC virtual media features instead of PXE boot.

Table 1.5. Example routes required on DCN1 Compute nodes

Destination network   Next hop      Purpose
172.17.0.0/24         172.17.10.1   Route to central internalapi
172.17.20.0/24        172.17.10.1   Route to DCN2 internalapi
172.18.0.0/24         172.18.10.1   Route to central storage
172.18.20.0/24        172.18.10.1   Route to DCN2 storage

You configure these routes in the edpm_network_config_template within the OpenStackDataPlaneNodeSet custom resource (CR) for each site.
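
The following fragment is a minimal sketch of such a node set for the DCN1 site. It combines the DCN1 subnets and VLAN from the NetConfig example that follows with the routes from Table 1.5. The node and host names, the nic1 device, the internalapi_ip variable, and the edpm_ovn_bridge_mappings value are assumptions for illustration only:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-dcn1
  namespace: openstack
spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        # Site-specific physnet mapping for DCN1 (assumed value)
        edpm_ovn_bridge_mappings: ["leaf1:br-ex"]
        edpm_network_config_template: |
          ---
          network_config:
          - type: vlan
            device: nic1
            vlan_id: 30
            addresses:
            - ip_netmask: {{ internalapi_ip }}/24
            routes:
            # Routes from Table 1.5: reach the central and DCN2
            # internalapi subnets through the DCN1 gateway
            - ip_netmask: 172.17.0.0/24
              next_hop: 172.17.10.1
            - ip_netmask: 172.17.20.0/24
              next_hop: 172.17.10.1
  nodes:
    edpm-compute-dcn1-0:
      hostName: dcn1-compute-0
      networks:
      - name: ctlplane
        subnetName: ctlplanedcn1
        defaultRoute: true
      - name: internalapi
        subnetName: internalapidcn1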

Table 1.6. Example network topology for a three-site DCN deployment

Network         Central site       DCN1 site          DCN2 site
Control plane   192.168.122.0/24   192.168.133.0/24   192.168.144.0/24
Internal API    172.17.0.0/24      172.17.10.0/24     172.17.20.0/24
Storage         172.18.0.0/24      172.18.10.0/24     172.18.20.0/24
Tenant          172.19.0.0/24      172.19.10.0/24     172.19.20.0/24

When you adopt a spine-leaf deployment, you configure the NetConfig CR with multiple subnets for each service network. Each subnet represents a different site.

Example NetConfig with multiple subnets per network

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: ctlplane
    dnsDomain: ctlplane.example.com
    subnets:
    - name: subnet1              # Central site
      allocationRanges:
      - end: 192.168.122.120
        start: 192.168.122.100
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
    - name: ctlplanedcn1         # DCN1 site
      allocationRanges:
      - end: 192.168.133.120
        start: 192.168.133.100
      cidr: 192.168.133.0/24
      gateway: 192.168.133.1
    - name: ctlplanedcn2         # DCN2 site
      allocationRanges:
      - end: 192.168.144.120
        start: 192.168.144.100
      cidr: 192.168.144.0/24
      gateway: 192.168.144.1
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1              # Central site
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      cidr: 172.17.0.0/24
      vlan: 20
    - name: internalapidcn1      # DCN1 site
      allocationRanges:
      - end: 172.17.10.250
        start: 172.17.10.100
      cidr: 172.17.10.0/24
      vlan: 30
    - name: internalapidcn2      # DCN2 site
      allocationRanges:
      - end: 172.17.20.250
        start: 172.17.20.100
      cidr: 172.17.20.0/24
      vlan: 40

  • Each network defines multiple subnets, one for each site.
  • Each site uses unique VLAN IDs. In this example, central uses VLANs 20-23, DCN1 uses VLANs 30-33, and DCN2 uses VLANs 40-43.
  • The subnet naming convention typically uses subnet1 for the central site and site-specific names like internalapidcn1 for edge sites.

Because the sites are geographically distributed, each site requires its own provider network (physnet). The Networking service (neutron) must be configured to recognize all physnets.

Example Neutron ML2 configuration for multiple physnets

[ml2_type_vlan]
network_vlan_ranges = leaf0:1:1000,leaf1:1:1000,leaf2:1:1000

[neutron]
physnets = leaf0,leaf1,leaf2

  • leaf0 corresponds to the central site.
  • leaf1 corresponds to the DCN1 site.
  • leaf2 corresponds to the DCN2 site.

When you create routed provider networks in RHOSO, you create network segments that map to these physnets:

  • Segment for central: physnet=leaf0, subnet=192.168.122.0/24
  • Segment for DCN1: physnet=leaf1, subnet=192.168.133.0/24
  • Segment for DCN2: physnet=leaf2, subnet=192.168.144.0/24
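
For illustration, the following commands sketch how such a routed provider network might be created after the adoption. The network and segment names, the VLAN IDs, and the use of the DCN1 subnet range are assumptions for this example only:

$ openstack network create --share \
    --provider-physical-network leaf0 \
    --provider-network-type vlan --provider-segment 100 \
    multisegment1

# The first segment is created implicitly with the network;
# add one additional segment per edge site.
$ openstack network segment create \
    --physical-network leaf1 --network-type vlan --segment 101 \
    --network multisegment1 segment-dcn1

# Associate a subnet with the DCN1 segment.
$ openstack subnet create --network multisegment1 \
    --network-segment segment-dcn1 \
    --subnet-range 192.168.133.0/24 subnet-dcn1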

1.11. Storage requirements

Storage in a Red Hat OpenStack Platform (RHOSP) deployment refers to the following types:

  • The storage that is needed for the service to run
  • The storage that the service manages

Before you can deploy the services in Red Hat OpenStack Services on OpenShift (RHOSO), you must review the storage requirements, plan your Red Hat OpenShift Container Platform (RHOCP) node selection, prepare your RHOCP nodes, and so on.

1.11.1. Storage driver certification

Before you adopt your Red Hat OpenStack Platform 17.1 deployment to a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment, confirm that your deployed storage drivers are certified for use with RHOSO 18.0. For information on software certified for use with RHOSO 18.0, see the Red Hat Ecosystem Catalog.

1.11.2. Block Storage service guidelines

Prepare to adopt your Block Storage service (cinder):

  • Take note of the Block Storage service back ends that you use.
  • Determine all the transport protocols that the Block Storage service back ends use, such as RBD, iSCSI, FC, NFS, NVMe-TCP, and so on. You must consider them when you place the Block Storage services and when the right storage transport-related binaries are running on the Red Hat OpenShift Container Platform (RHOCP) nodes. For more information about each storage transport protocol, see RHOCP preparation for Block Storage service adoption.
  • Use a Block Storage service volume service to deploy each Block Storage service volume back end.

    For example, if you have an LVM back end and a Ceph back end, you need two entries in cinderVolumes. You cannot set global defaults for all volume services, so you must define a service for each back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        enabled: true
        template:
          cinderVolumes:
            lvm:
              customServiceConfig: |
                [DEFAULT]
                debug = True
                [lvm]
    < . . . >
            ceph:
              customServiceConfig: |
                [DEFAULT]
                debug = True
                [ceph]
    < . . . >
    Warning

    Check that all configuration options are still valid for RHOSO 18.0 version. Configuration options might be deprecated, removed, or added. This applies to both back-end driver-specific configuration options and other generic options.

Before you begin the Block Storage service (cinder) adoption, review the following limitations:

  • There is no global nodeSelector option for all Block Storage service volumes. You must specify the nodeSelector for each back end.
  • There are no global customServiceConfig or customServiceConfigSecrets options for all Block Storage service volumes. You must specify these options for each back end.
  • Support for Block Storage service back ends that require kernel modules that are not included in Red Hat Enterprise Linux is not tested in Red Hat OpenStack Services on OpenShift (RHOSO).

Before you deploy Red Hat OpenStack Platform (RHOSP) in Red Hat OpenShift Container Platform (RHOCP) nodes, ensure that the networks are ready, that you decide which RHOCP nodes to restrict, and that you make any necessary changes to the RHOCP nodes.

Node selection

You might need to restrict the RHOCP nodes where the Block Storage service volume and backup services run.

An example of when you need to restrict nodes for a specific Block Storage service is when you deploy the Block Storage service with the LVM driver. In that scenario, the LVM data where the volumes are stored only exists in a specific host, so you need to pin the Block Storage-volume service to that specific RHOCP node. Running the service on any other RHOCP node does not work. You cannot use the RHOCP host node name to restrict the LVM back end. You need to identify the LVM back end by using a unique label, an existing label, or a new label:

$ oc label nodes worker0 lvm=cinder-volumes
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm-iscsi:
          nodeSelector:
            lvm: cinder-volumes
< . . . >

For more information about node selection, see About node selectors.

Note

If your nodes do not have enough local disk space for temporary images, you can use a remote NFS location by setting the extra volumes feature, extraMounts.

Transport protocols

Some changes to the storage transport protocols might be required for RHOCP:

  • If you use a MachineConfig to make changes to RHOCP nodes, the nodes reboot.
  • Check the back-end sections that are listed in the enabled_backends configuration option in your cinder.conf file to determine the enabled storage back-end sections.
  • Depending on the back end, you can find the transport protocol by viewing the volume_driver or target_protocol configuration options.
  • The iscsid service, multipathd service, and NVMe-TCP kernel modules start automatically on data plane nodes.

    NFS
    • RHOCP connects to NFS back ends without additional changes.
    Rados Block Device and Red Hat Ceph Storage
    • RHOCP connects to Red Hat Ceph Storage back ends without additional changes. You must provide credentials and configuration files to the services.
    iSCSI
    • To connect to iSCSI volumes, the iSCSI initiator must run on the RHOCP hosts where the volume and backup services run. The Linux Open-iSCSI initiator does not support network namespaces, so you can run only one instance of the service on each host, shared by normal RHOCP usage, the RHOCP CSI plugins, and the RHOSP services.
    • If you are not already running iscsid on the RHOCP nodes, then you must apply a MachineConfig. For example:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker
          service: cinder
        name: 99-master-cinder-enable-iscsid
      spec:
        config:
          ignition:
            version: 3.2.0
          systemd:
            units:
            - enabled: true
              name: iscsid.service
    • If you use labels to restrict the nodes where the Block Storage services run, you must use a MachineConfigPool to limit the effects of the MachineConfig to the nodes where your services might run. For more information, see About node selectors.
    • If you are using a single node deployment to test the process, replace worker with master in the MachineConfig.
    • For production deployments that use iSCSI volumes, configure multipathing for better I/O.
    FC
    • The Block Storage service volume and Block Storage service backup services must run in an RHOCP host that has host bus adapters (HBAs). If some nodes do not have HBAs, then use labels to restrict where these services run. For more information, see About node selectors.
    • If the Image service is configured to use Block Storage service as a back end with FC, the Image service must also run on an RHOCP host that has HBAs and follow the same node selection requirements as the Block Storage service.
    • If you have virtualized RHOCP clusters that use FC, you must expose the host HBAs inside the virtual machines.
    • For production deployments that use FC volumes, configure multipathing for better I/O.
    NVMe-TCP
    • To connect to NVMe-TCP volumes, load NVMe-TCP kernel modules on the RHOCP hosts.
    • If you do not already load the nvme-fabrics module on the RHOCP nodes where the volume and backup services are going to run, then you must apply a MachineConfig. For example:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker
          service: cinder
        name: 99-master-cinder-load-nvme-fabrics
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
              - path: /etc/modules-load.d/nvme_fabrics.conf
                overwrite: false
                # Mode must be decimal, this is 0644
                mode: 420
                user:
                  name: root
                group:
                  name: root
                contents:
                  # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
                  # This is the rfc2397 text/plain string format
                  source: data:,nvme-fabrics
    • If you use labels to restrict the nodes where Block Storage services run, use a MachineConfigPool to limit the effects of the MachineConfig to the nodes where your services run. For more information, see About node selectors.
    • If you use a single node deployment to test the process, replace worker with master in the MachineConfig.
    • Only load the nvme-fabrics module because it loads the transport-specific modules, such as TCP, RDMA, or FC, as needed.
    • For production deployments that use NVMe-TCP volumes, use multipathing for better I/O. For NVMe-TCP volumes, RHOCP uses native multipathing, called ANA.
    • After the RHOCP nodes reboot and load the nvme-fabrics module, you can confirm that the operating system is configured and that it supports ANA by checking the host:

      $ cat /sys/module/nvme_core/parameters/multipath
      Important

      ANA does not use the Linux Multipathing Device Mapper, but RHOCP requires multipathd to run on Compute nodes for the Compute service (nova) to be able to use multipathing. Multipathing is automatically configured on data plane nodes when they are provisioned.

    Multipathing
    • Use multipathing for iSCSI and FC protocols. To configure multipathing on these protocols, you perform the following tasks:

      • Prepare the RHOCP hosts
      • Configure the Block Storage services
      • Prepare the Compute service nodes
      • Configure the Compute service
    • To prepare the RHOCP hosts, ensure that the Linux Multipath Device Mapper is configured and running on the RHOCP hosts by using MachineConfig. For example:

      # Includes the /etc/multipathd.conf contents and the systemd unit changes
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker
          service: cinder
        name: 99-master-cinder-enable-multipathd
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
              - path: /etc/multipath.conf
                overwrite: false
                # Mode must be decimal, this is 0600
                mode: 384
                user:
                  name: root
                group:
                  name: root
                contents:
                  # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
                  # This is the rfc2397 text/plain string format
                  source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
          systemd:
            units:
            - enabled: true
              name: multipathd.service
    • If you use labels to restrict the nodes where Block Storage services run, you need to use a MachineConfigPool to limit the effects of the MachineConfig to only the nodes where your services run. For more information, see About node selectors.
    • If you are using a single node deployment to test the process, replace worker with master in the MachineConfig.
    • The Block Storage service volume and backup services are configured to use multipathing by default. You can confirm that the multipath daemon is running on the RHOCP nodes, as shown in the following example.
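      The following is a minimal verification sketch; the node name is a placeholder for one of the RHOCP nodes that runs the Block Storage services:

      $ oc debug node/<node_name> -- chroot /host systemctl is-active multipathd

      The command starts a debug pod on the node and reports active when the multipath daemon is running.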

In your previous deployment, you use the same cinder.conf file for all the Block Storage services. To prepare your Block Storage service (cinder) configuration for adoption, split this single-file configuration into individual configurations for each Block Storage service. Review the following information to guide you in converting your previous configuration; a consolidated example follows the list:

  • Determine which parts of the configuration are generic to all Block Storage services, and remove anything that would change when deployed in Red Hat OpenShift Container Platform (RHOCP), such as the connection option in the [database] section, the transport_url and log_dir options in the [DEFAULT] section, and the whole [coordination] and [barbican] sections. The remaining generic configuration goes into the customServiceConfig option, or into a Secret custom resource (CR) that you reference in the customServiceConfigSecrets section, at the cinder: template: level.
  • Determine if there is a scheduler-specific configuration and add it to the customServiceConfig option in cinder: template: cinderScheduler.
  • Determine if there is an API-specific configuration and add it to the customServiceConfig option in cinder: template: cinderAPI.
  • If the Block Storage service backup is deployed, add the Block Storage service backup configuration options to the customServiceConfig option, or to a Secret CR that you add to the customServiceConfigSecrets section, at the cinder: template: cinderBackup: level. Remove the host configuration in the [DEFAULT] section to support multiple replicas later.
  • Determine the individual volume back-end configuration for each of the drivers. The configuration is in the specific driver section, and it includes the [backend_defaults] section and FC zoning sections if you use them. The Block Storage service operator does not support a global customServiceConfig option for all volume services. Each back end has its own section under cinder: template: cinderVolumes, and the configuration goes in the customServiceConfig option or in a Secret CR and is then used in the customServiceConfigSecrets section.
  • If any of the Block Storage service volume drivers require a custom vendor image, find the location of the image in the Red Hat Ecosystem Catalog, and create or modify an OpenStackVersion CR to specify the custom image by using the key from the cinderVolumes section.

    For example, if you have the following configuration:

    spec:
      cinder:
        enabled: true
        template:
          cinderVolumes:
            pure:
              customServiceConfigSecrets:
                - openstack-cinder-pure-cfg
    < . . . >

    Then the OpenStackVersion CR that describes the container image for that back end looks like the following example:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackVersion
    metadata:
      name: openstack
    spec:
      customContainerImages:
        cinderVolumeImages:
          pure: registry.connect.redhat.com/purestorage/openstack-cinder-volume-pure-rhosp-18-0
    Note

    The name of the OpenStackVersion must match the name of your OpenStackControlPlane CR.

  • If your Block Storage services use external files, for example, a custom policy, or credentials or SSL certificate authority bundles to connect to a storage array, make those files available to the right containers. Use Secrets or ConfigMaps to store the information in RHOCP, and then reference them in the extraMounts key. For example, for Red Hat Ceph Storage credentials that are stored in a Secret called ceph-conf-files, you patch the top-level extraMounts key in the OpenStackControlPlane CR:

    spec:
      extraMounts:
      - extraVol:
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph
            readOnly: true
          propagation:
          - CinderVolume
          - CinderBackup
          - Glance
          volumes:
          - name: ceph
            projected:
              sources:
              - secret:
                  name: ceph-conf-files
  • For a service-specific file, such as the API policy, add the configuration to the service itself. The following example includes the CinderAPI configuration that references the policy that you are adding from a ConfigMap named my-cinder-conf, which has a policy key containing the contents of the policy file:

    spec:
      cinder:
        enabled: true
        template:
          cinderAPI:
            customServiceConfig: |
               [oslo_policy]
               policy_file=/etc/cinder/api/policy.yaml
          extraMounts:
          - extraVol:
            - extraVolType: Ceph
              mounts:
              - mountPath: /etc/cinder/api
                name: policy
                readOnly: true
              propagation:
              - CinderAPI
              volumes:
              - name: policy
                projected:
                  sources:
                  - configMap:
                      name: my-cinder-conf
                      items:
                        - key: policy
                          path: policy.yaml
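    To provide the policy in that ConfigMap, you can create it from a local file. The following command is an illustrative sketch; the local file name policy.yaml and the openstack namespace are assumptions that you adapt to your environment:

    $ oc create configmap my-cinder-conf -n openstack --from-file=policy=policy.yaml

Putting the guidance in this list together, a split Block Storage service configuration might look like the following minimal sketch. The back-end name (ceph), the option values, and the Secret name are illustrative placeholders rather than values from your deployment:

spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        debug = false
      cinderAPI:
        customServiceConfig: |
          [DEFAULT]
          osapi_volume_workers = 3
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_max_attempts = 3
      cinderBackup:
        customServiceConfigSecrets:
          - cinder-backup-config
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [ceph]
            volume_backend_name = ceph
            volume_driver = cinder.volume.drivers.rbd.RBDDriver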

1.11.6. Changes to CephFS through NFS

Before you begin the adoption, review the following information to understand the changes to CephFS through NFS between Red Hat OpenStack Platform (RHOSP) 17.1 and Red Hat OpenStack Services on OpenShift (RHOSO) 18.0:

  • If the RHOSP 17.1 deployment uses CephFS through NFS as a back end for Shared File Systems service (manila), you cannot directly import the ceph-nfs service on the RHOSP Controller nodes into RHOSO 18.0. In RHOSO 18.0, the Shared File Systems service only supports using a clustered NFS service that is directly managed on the Red Hat Ceph Storage cluster. Adoption with the ceph-nfs service involves a data path disruption to existing NFS clients.
  • On RHOSP 17.1, Pacemaker manages the high availability of the ceph-nfs service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated StorageNFS network. The Controller nodes have ordering and collocation constraints established between this VIP, ceph-nfs, and the Shared File Systems service (manila) share manager service. Prior to adopting the Shared File Systems service, you must adjust the Pacemaker ordering and collocation constraints to separate the share manager service. This establishes ceph-nfs with its VIP as an isolated, standalone NFS service that you can decommission after completing the RHOSO adoption.
  • In Red Hat Ceph Storage 7, a native clustered Ceph NFS service has to be deployed on the Red Hat Ceph Storage cluster by using the Ceph Orchestrator prior to adopting the Shared File Systems service. This NFS service eventually replaces the standalone NFS service from RHOSP 17.1 in your deployment. When the Shared File Systems service is adopted into the RHOSO 18.0 environment, it establishes all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. After the service is decommissioned, you can re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime.
  • To ensure that NFS users are not required to make any networking changes to their existing workloads, assign an IP address from the same isolated StorageNFS network to the clustered Ceph NFS service. NFS users only need to discover and re-mount their shares by using the new export paths. When the adoption is complete, RHOSO users can query the Shared File Systems service API to list the export locations on existing shares and identify the preferred paths to mount those shares; an example command follows this list. These preferred paths correspond to the new clustered Ceph NFS service, in contrast to other non-preferred export paths that continue to be displayed until the old isolated, standalone NFS service is decommissioned.
  • When you migrate your workloads from the old NFS service, you must ensure that exports are not consumed from both the old NFS service and the new clustered Ceph NFS service at the same time. This simultaneous access to both services is considered dangerous and bypasses the protections for concurrent access that is ensured by the NFS protocol. When you migrate the workloads to use exports from the new NFS service, you must ensure that you migrate the use of each export entirely so that no part of the workload stays connected to the old NFS service.
  • You can no longer control the old Pacemaker-managed ceph-nfs service through the Red Hat OpenStack Platform director after the control plane adoption is complete. This means that there is no support for updating the NFS Ganesha software or changing any configuration. While data is protected from server crashes or restarts, high availability and data recovery are still limited, and these maintenance issues are no longer visible to the Shared File Systems service.
  • Cloud administrators must ensure a reasonably short window to switch over all end-user workloads to the new NFS service.
  • While the old ceph-nfs service only supported NFS version 4.1 and later, the new clustered NFS service supports NFS protocols 3 and 4.1 and later. Mixing protocol versions when you access an export can have unintended consequences. Mount a given share across all clients by using a consistent NFS protocol version.
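For example, after the adoption, a user can list the export locations of a share to find the preferred path on the new clustered Ceph NFS service. The share name is a placeholder, and this assumes that the OpenStack client plug-in for the Shared File Systems service is installed:

$ openstack share export location list <share_name>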

1.12. Red Hat Ceph Storage prerequisites

Before you migrate your Red Hat Ceph Storage cluster daemons from your Controller nodes, you must complete the following tasks in your Red Hat OpenStack Platform 17.1 environment to prepare for the Red Hat OpenStack Services on OpenShift (RHOSO) adoption.

  • Upgrade your Red Hat Ceph Storage cluster to release 7. For more information, see "Upgrading Red Hat Ceph Storage 6 to 7" in Framework for upgrades (16.2 to 17.1).
  • Your Red Hat Ceph Storage 7 deployment is managed by cephadm.
  • The undercloud is still available, and the nodes and networks are managed by director.
  • If you use an externally deployed Red Hat Ceph Storage cluster, you must recreate a ceph-nfs cluster on the target nodes and propagate the StorageNFS network.
  • Complete the prerequisites for your specific Red Hat Ceph Storage environment:

    • Red Hat Ceph Storage with monitoring stack components
    • Red Hat Ceph Storage RGW
    • Red Hat Ceph Storage RBD
    • NFS Ganesha

1.12.1. Red Hat Ceph Storage with monitoring stack components

Before you migrate a Red Hat Ceph Storage cluster with monitoring stack components, you must gather monitoring stack information, review and update the container image registry, and remove the undercloud container images.

Note

In addition to updating the container images related to the monitoring stack, you must update the configuration entry related to the container_image_base. This has an impact on all the Red Hat Ceph Storage daemons that rely on the undercloud images. New daemons are deployed by using the new image registry location that is configured in the Red Hat Ceph Storage cluster.

Procedure

  1. Gather the current status of the monitoring stack. Verify whether the hosts have a monitoring label, or the grafana, prometheus, or alertmanager labels in the case of a per-daemon placement evaluation:

    Note

    The entire relocation process is driven by cephadm and relies on labels to be assigned to the target nodes, where the daemons are scheduled. For more information about assigning labels to nodes, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations.

    [tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph orch host ls
    
    HOST                    	ADDR       	LABELS                 	STATUS
    cephstorage-0.redhat.local  192.168.24.11  osd mds
    cephstorage-1.redhat.local  192.168.24.12  osd mds
    cephstorage-2.redhat.local  192.168.24.47  osd mds
    controller-0.redhat.local   192.168.24.35  _admin mon mgr
    controller-1.redhat.local   192.168.24.53  mon _admin mgr
    controller-2.redhat.local   192.168.24.10  mon _admin mgr
    6 hosts in cluster

    Confirm that the cluster is healthy and that both ceph orch ls and ceph orch ps return the expected number of deployed daemons.
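
    For example, the following commands report the cluster health and the currently deployed services and daemons:

    [tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph -s
    [tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph orch ls
    [tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph orch ps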

  2. Review and update the container image registry:

    Note

    If you run the Red Hat Ceph Storage externalization procedure after you migrate the Red Hat OpenStack Platform control plane, update the container images in the Red Hat Ceph Storage cluster configuration. The current container images point to the undercloud registry, which might not be available anymore. Because the undercloud is not available after adoption is complete, replace the undercloud-provided images with an alternative registry.

    $ ceph config dump
    ...
    ...
    mgr   advanced  mgr/cephadm/container_image_alertmanager    undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.10
    mgr   advanced  mgr/cephadm/container_image_base            undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph
    mgr   advanced  mgr/cephadm/container_image_grafana         undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest
    mgr   advanced  mgr/cephadm/container_image_node_exporter   undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.10
    mgr   advanced  mgr/cephadm/container_image_prometheus      undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.10
  3. Remove the undercloud container images:

    $ cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_base
    for i in prometheus grafana alertmanager node_exporter; do
        cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_$i
    done
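
    Optionally, confirm that the undercloud-based image entries are removed by dumping the configuration again and filtering for image settings:

    $ cephadm shell -- ceph config dump | grep container_image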

1.12.2. Red Hat Ceph Storage RGW

Complete the following prerequisites before you begin the Ceph Object Gateway (RGW) migration.

Procedure

  1. Check the current status of the Red Hat Ceph Storage nodes:

    (undercloud) [stack@undercloud-0 ~]$ metalsmith list
    
    
        +------------------------+    +----------------+
        | IP Addresses           |    |  Hostname      |
        +------------------------+    +----------------+
        | ctlplane=192.168.24.25 |    | cephstorage-0  |
        | ctlplane=192.168.24.10 |    | cephstorage-1  |
        | ctlplane=192.168.24.32 |    | cephstorage-2  |
        | ctlplane=192.168.24.28 |    | compute-0      |
        | ctlplane=192.168.24.26 |    | compute-1      |
        | ctlplane=192.168.24.43 |    | controller-0   |
        | ctlplane=192.168.24.7  |    | controller-1   |
        | ctlplane=192.168.24.41 |    | controller-2   |
        +------------------------+    +----------------+
  2. Log in to controller-0 and check the Pacemaker status, for example by running the sudo pcs status command, to identify important information for the RGW migration:

    Full List of Resources:
      * ip-192.168.24.46	(ocf:heartbeat:IPaddr2):     	Started controller-0
      * ip-10.0.0.103   	(ocf:heartbeat:IPaddr2):     	Started controller-1
      * ip-172.17.1.129 	(ocf:heartbeat:IPaddr2):     	Started controller-2
      * ip-172.17.3.68  	(ocf:heartbeat:IPaddr2):     	Started controller-0
      * ip-172.17.4.37  	(ocf:heartbeat:IPaddr2):     	Started controller-1
      * Container bundle set: haproxy-bundle
    
    [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
        * haproxy-bundle-podman-0   (ocf:heartbeat:podman):  Started controller-2
        * haproxy-bundle-podman-1   (ocf:heartbeat:podman):  Started controller-0
        * haproxy-bundle-podman-2   (ocf:heartbeat:podman):  Started controller-1
  3. Identify the ranges of the storage networks. The following is an example and the values might differ in your environment:

    [heat-admin@controller-0 ~]$ ip -o -4 a
    
    1: lo	inet 127.0.0.1/8 scope host lo\   	valid_lft forever preferred_lft forever
    2: enp1s0	inet 192.168.24.45/24 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
    2: enp1s0	inet 192.168.24.46/32 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
    7: br-ex	inet 10.0.0.122/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever
    8: vlan70	inet 172.17.5.22/24 brd 172.17.5.255 scope global vlan70\   	valid_lft forever preferred_lft forever
    8: vlan70	inet 172.17.5.94/32 brd 172.17.5.255 scope global vlan70\   	valid_lft forever preferred_lft forever
    9: vlan50	inet 172.17.2.140/24 brd 172.17.2.255 scope global vlan50\   	valid_lft forever preferred_lft forever
    10: vlan30	inet 172.17.3.73/24 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
    10: vlan30	inet 172.17.3.68/32 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
    11: vlan20	inet 172.17.1.88/24 brd 172.17.1.255 scope global vlan20\   	valid_lft forever preferred_lft forever
    12: vlan40	inet 172.17.4.24/24 brd 172.17.4.255 scope global vlan40\   	valid_lft forever preferred_lft forever
    • br-ex represents the External Network, where HAProxy currently has the front-end Virtual IP (VIP) assigned.
    • vlan30 represents the Storage Network, where the new RGW instances should be started on the Red Hat Ceph Storage nodes.
  4. Identify the network that you previously had in HAProxy and propagate it through director to the Red Hat Ceph Storage nodes. Use this network to reserve a new VIP that is owned by Red Hat Ceph Storage as the entry point for the RGW service.

    1. Log in to controller-0 and find the ceph_rgw section in the current HAProxy configuration:

      $ less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
      ...
      ...
      listen ceph_rgw
        bind 10.0.0.103:8080 transparent
        bind 172.17.3.68:8080 transparent
        mode http
        balance leastconn
        http-request set-header X-Forwarded-Proto https if { ssl_fc }
        http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
        http-request set-header X-Forwarded-Port %[dst_port]
        option httpchk GET /swift/healthcheck
        option httplog
        option forwardfor
        server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
        server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
        server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2
    2. Confirm that the network is used as an HAProxy front end. The following example shows that controller-0 exposes the services by using the external network, which is absent from the Red Hat Ceph Storage nodes. You must propagate the external network through director:

      [controller-0]$ ip -o -4 a
      
      ...
      7: br-ex	inet 10.0.0.106/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever
      ...
      Note

      If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.

  5. Propagate the HAProxy front-end network to Red Hat Ceph Storage nodes.

    1. In the NIC template that you use to define the ceph-storage network interfaces, add the new config section in the Red Hat Ceph Storage network configuration template file, for example, /home/stack/composable_roles/network/nic-configs/ceph-storage.j2:

      ---
      network_config:
      - type: interface
        name: nic1
        use_dhcp: false
        dns_servers: {{ ctlplane_dns_nameservers }}
        addresses:
        - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
        routes: {{ ctlplane_host_routes }}
      - type: vlan
        vlan_id: {{ storage_mgmt_vlan_id }}
        device: nic1
        addresses:
        - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }}
        routes: {{ storage_mgmt_host_routes }}
      - type: interface
        name: nic2
        use_dhcp: false
        defroute: false
      - type: vlan
        vlan_id: {{ storage_vlan_id }}
        device: nic2
        addresses:
        - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
        routes: {{ storage_host_routes }}
      - type: ovs_bridge
        name: {{ neutron_physical_bridge_name }}
        dns_servers: {{ ctlplane_dns_nameservers }}
        domain: {{ dns_search_domains }}
        use_dhcp: false
        addresses:
        - ip_netmask: {{ external_ip }}/{{ external_cidr }}
        routes: {{ external_host_routes }}
        members:
        - type: interface
          name: nic3
          primary: true
    2. Add the External Network to the bare metal file, for example, /home/stack/composable_roles/network/baremetal_deployment.yaml that is used by metalsmith:

      Note

      Ensure that network_config_update is enabled for network propagation to the target nodes when os-net-config is triggered.

      - name: CephStorage
        count: 3
        hostname_format: cephstorage-%index%
        instances:
        - hostname: cephstorage-0
          name: ceph-0
        - hostname: cephstorage-1
          name: ceph-1
        - hostname: cephstorage-2
          name: ceph-2
        defaults:
          profile: ceph-storage
          network_config:
            template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2
            network_config_update: true
          networks:
          - network: ctlplane
            vif: true
          - network: storage
          - network: storage_mgmt
          - network: external
    3. Configure the new network on the bare metal nodes:

      (undercloud) [stack@undercloud-0]$ openstack overcloud node provision \
         -o overcloud-baremetal-deployed-0.yaml \
         --stack overcloud \
         --network-config -y \
        $PWD/composable_roles/network/baremetal_deployment.yaml
    4. Verify that the new network is configured on the Red Hat Ceph Storage nodes:

      [root@cephstorage-0 ~]# ip -o -4 a
      
      1: lo	inet 127.0.0.1/8 scope host lo\   	valid_lft forever preferred_lft forever
      2: enp1s0	inet 192.168.24.54/24 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
      11: vlan40	inet 172.17.4.43/24 brd 172.17.4.255 scope global vlan40\   	valid_lft forever preferred_lft forever
      12: vlan30	inet 172.17.3.23/24 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
      14: br-ex	inet 10.0.0.133/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever

1.12.3. Red Hat Ceph Storage RBD

Complete the following prerequisites before you begin the Red Hat Ceph Storage Rados Block Device (RBD) migration.

  • The target CephStorage or ComputeHCI nodes are configured to have both the storage and storage_mgmt networks. This ensures that you can use both the Red Hat Ceph Storage public and cluster networks from the same node. In Red Hat OpenStack Platform 17.1 and later, you do not need to run a stack update.
  • NFS Ganesha is migrated from a director deployment to cephadm. For more information, see "Creating an NFS Ganesha cluster".
  • The Ceph Metadata Server, monitoring stack, Ceph Object Gateway, and any other daemons that are deployed on Controller nodes are migrated to the target nodes.
  • The daemon distribution follows the cardinality constraints that are described in "Red Hat Ceph Storage: Supported configurations".
  • The Red Hat Ceph Storage cluster is healthy, and the ceph -s command returns HEALTH_OK.
  • Run os-net-config on the bare metal node and configure additional networks:

    1. If target nodes are CephStorage, ensure that the network is defined in the bare metal file for the CephStorage nodes, for example, /home/stack/composable_roles/network/baremetal_deployment.yaml:

      - name: CephStorage
        count: 2
        instances:
        - hostname: oc0-ceph-0
          name: oc0-ceph-0
        - hostname: oc0-ceph-1
          name: oc0-ceph-1
        defaults:
          networks:
          - network: ctlplane
            vif: true
          - network: storage_cloud_0
            subnet: storage_cloud_0_subnet
          - network: storage_mgmt_cloud_0
            subnet: storage_mgmt_cloud_0_subnet
          network_config:
            template: templates/single_nic_vlans/single_nic_vlans_storage.j2
    2. Add the missing network:

      $ openstack overcloud node provision \
          -o overcloud-baremetal-deployed-0.yaml --stack overcloud-0 \
          --network-config -y --concurrency 2 /home/stack/metalsmith-0.yaml
    3. Verify that the storage network is configured on the target nodes:

      (undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.14 ip -o -4 a
      1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
      5: br-storage    inet 192.168.24.14/24 brd 192.168.24.255 scope global br-storage\       valid_lft forever preferred_lft forever
      6: vlan1    inet 192.168.24.14/24 brd 192.168.24.255 scope global vlan1\       valid_lft forever preferred_lft forever
      7: vlan11    inet 172.16.11.172/24 brd 172.16.11.255 scope global vlan11\       valid_lft forever preferred_lft forever
      8: vlan12    inet 172.16.12.46/24 brd 172.16.12.255 scope global vlan12\       valid_lft forever preferred_lft forever

1.12.4. Creating an NFS Ganesha cluster

If you use CephFS through NFS with the Shared File Systems service (manila), you must create a new clustered NFS service on the Red Hat Ceph Storage cluster. This service replaces the standalone, Pacemaker-controlled ceph-nfs service that you use in Red Hat OpenStack Platform (RHOSP) 17.1.

Procedure

  1. Identify the Red Hat Ceph Storage nodes to deploy the new clustered NFS service, for example, cephstorage-0, cephstorage-1, cephstorage-2.

    Note

    You must deploy this service on the StorageNFS isolated network so that you can mount your existing shares through the new NFS export locations. You can deploy the new clustered NFS service on your existing CephStorage nodes or HCI nodes, or on new hardware that you enrolled in the Red Hat Ceph Storage cluster.

  2. If you deployed your Red Hat Ceph Storage nodes with director, propagate the StorageNFS network to the target nodes where the ceph-nfs service is deployed.

    Note

    If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.

    1. Identify the node definition file, overcloud-baremetal-deploy.yaml, that is used in the RHOSP environment. For more information about identifying the overcloud-baremetal-deploy.yaml file, see Customizing overcloud networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
    2. Edit the networks that are associated with the Red Hat Ceph Storage nodes to include the StorageNFS network:

      - name: CephStorage
        count: 3
        hostname_format: cephstorage-%index%
        instances:
        - hostname: cephstorage-0
          name: ceph-0
        - hostname: cephstorage-1
          name: ceph-1
        - hostname: cephstorage-2
          name: ceph-2
        defaults:
          profile: ceph-storage
          network_config:
            template: /home/stack/network/nic-configs/ceph-storage.j2
            network_config_update: true
          networks:
          - network: ctlplane
            vif: true
          - network: storage
          - network: storage_mgmt
          - network: storage_nfs
    3. Edit the network configuration template file, for example, /home/stack/network/nic-configs/ceph-storage.j2, for the Red Hat Ceph Storage nodes to include an interface that connects to the StorageNFS network:

      - type: vlan
        device: nic2
        vlan_id: {{ storage_nfs_vlan_id }}
        addresses:
        - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }}
        routes: {{ storage_nfs_host_routes }}
    4. Update the Red Hat Ceph Storage nodes:

      $ openstack overcloud node provision \
          --stack overcloud   \
          --network-config -y  \
          -o overcloud-baremetal-deployed-storage_nfs.yaml \
          --concurrency 2 \
          /home/stack/network/baremetal_deployment.yaml

      When the update is complete, ensure that a new interface is created on the Red Hat Ceph Storage nodes and that it is tagged with the VLAN that is associated with StorageNFS.

  3. Identify the IP address from the StorageNFS network to use as the Virtual IP address (VIP) for the Ceph NFS service:

    $ openstack port list -c "Fixed IP Addresses" --network storage_nfs
  4. In a running cephadm shell, identify the hosts for the NFS service:

    $ ceph orch host ls
  5. Label each host that you identified. Repeat this command for each host that you want to label:

    $ ceph orch host label add <hostname> nfs
    • Replace <hostname> with the name of the host that you identified.
  6. Create the NFS cluster:

    $ ceph nfs cluster create cephfs \
        "label:nfs" \
        --ingress \
        --virtual-ip=<VIP> \
        --ingress-mode=haproxy-protocol
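    • Replace <VIP> with the IP address from the StorageNFS network that you identified earlier in this procedure.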
  7. Check the status of the NFS cluster:

    $ ceph nfs cluster ls
    $ ceph nfs cluster info cephfs

To enable the high availability for Compute instances (Instance HA) service after you adopt the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 data plane, perform the following preparation tasks:

  • Create a fencing configuration file to use after you adopt the RHOSO data plane.
  • Prevent Pacemaker from monitoring or recovering the Compute nodes.

To maintain the high availability for Compute instances (Instance HA) functionality after you adopt Red Hat OpenStack Services on OpenShift 18.0, create a fencing configuration file to use in your adopted environment.

Procedure

  1. Gather the fencing information from the fencing.yaml file in your Red Hat OpenStack Platform (RHOSP) 17.1 cluster.
  2. Retrieve the RHOSP 17.1 stonith configuration from any of your overcloud Controller nodes:

    $ sudo pcs config
    Stonith Devices:
    ...
      Resource: stonith-fence_ipmilan-525400dde4f7 (class=stonith
          type=fence_ipmilan)
        Attributes: stonith-fence_ipmilan-525400dde4f7-instance_attributes
          delay=20
          ipaddr=172.16.0.1
          ipport=6231
          lanplus=true
          login=admin
          passwd=password
          pcmk_host_list=compute-1
        Operations:
          monitor: stonith-fence_ipmilan-525400dde4f7-monitor-interval-60s
            interval=60s
      Resource: stonith-fence_ipmilan-525400819ad3 (class=stonith
          type=fence_ipmilan)
        Attributes: stonith-fence_ipmilan-525400819ad3-instance_attributes
          delay=20
          ipaddr=172.16.0.1
          ipport=6230
          lanplus=true
          login=admin
          passwd=password
          pcmk_host_list=compute-0
        Operations:
          monitor: stonith-fence_ipmilan-525400819ad3-monitor-interval-60s
            interval=60s
    ...
  3. Generate the fencing configuration file:

You must disable Pacemaker so that it does not monitor your Compute nodes during the adoption. For example, if a network issue occurs during the adoption, Pacemaker attempts to reboot the Compute nodes to recover them, which breaks the adoption.

Procedure

  1. Retrieve the names of the Compute remote resources:

    $ sudo pcs stonith |grep -B1 stonith-fence_compute-fence-nova |grep Target |awk -F ': ' '{print $2}'
  2. Disable stonith for the cluster, and disable the pacemaker_remote resource for each Compute remote resource that you retrieved:

    $ sudo pcs property set stonith-enabled=false
    $ sudo pcs resource disable <compute_remote_resource>

    where:

    <compute_remote_resource>
    Specifies the name of the Compute remote resource in your environment.
  3. Retrieve the names of the Compute stonith resources:

    $ sudo pcs stonith |grep Level |grep fence_compute |awk '{print $4}' |awk -F ',' '{print $1}' |sort |uniq
  4. Remove the Compute node pacemaker_remote and fencing resources:

    $ sudo pcs stonith disable stonith-fence_compute-fence-nova
    $ sudo pcs stonith disable <compute_stonith_resource>
    $ sudo pcs stonith delete <compute_stonith_resource>
    $ sudo pcs resource delete <compute_remote_resource>
    $ sudo pcs resource disable compute-unfence-trigger-clone
    $ sudo pcs resource delete compute-unfence-trigger-clone
    $ sudo pcs resource disable nova-evacuate
    $ sudo pcs resource delete nova-evacuate

    where:

    <compute_stonith_resource>
    Specifies the name of the Compute stonith resource in your environment.
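
After you remove the resources, you can optionally run sudo pcs status on a Controller node to confirm that the Compute remote, fencing, and evacuation resources no longer appear in the cluster status.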

To help you manage the configuration for your director and Red Hat OpenStack Platform (RHOSP) services, you can compare the configuration files between your director deployment and the Red Hat OpenStack Services on OpenShift (RHOSO) cloud by using the os-diff tool.

Prerequisites

  • The os-diff tool, which is written in Golang, is installed and configured on your environment:

    dnf install -y golang-github-openstack-k8s-operators-os-diff

Procedure

  1. Configure the /etc/os-diff/os-diff.cfg file and the /etc/os-diff/ssh.config file according to your environment. To allow os-diff to connect to your clouds and pull files from the services that you describe in the config.yaml file, you must set the following options in the os-diff.cfg file:

    [Default]
    
    local_config_dir=/tmp/
    service_config_file=config.yaml
    
    [Tripleo]
    
    ssh_cmd=ssh -F ssh.config
    director_host=standalone
    container_engine=podman
    connection=ssh
    remote_config_path=/tmp/tripleo
    local_config_path=/tmp/
    
    [Openshift]
    
    ocp_local_config_path=/tmp/ocp
    connection=local
    ssh_cmd=""
    • ssh_cmd=ssh -F ssh.config instructs os-diff to access your director host through SSH. The default value is ssh -F ssh.config. However, you can set the value without an ssh.config file, for example, ssh -i /home/user/.ssh/id_rsa stack@my.undercloud.local.
    • director_host=standalone specifies the host to use to access your cloud. The podman or docker binary must be installed on this host and must be able to interact with the running containers. You can leave this key blank.
  2. If you use a host file to connect to your cloud, configure the ssh.config file to allow os-diff to access your RHOSP environment, for example:

    Host *
        IdentitiesOnly yes
    
    Host virthost
        Hostname virthost
        IdentityFile ~/.ssh/id_rsa
        User root
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
    
    
    Host standalone
        Hostname standalone
        IdentityFile <path to SSH key>
        User root
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
    
    Host crc
        Hostname crc
        IdentityFile ~/.ssh/id_rsa
        User stack
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
    • Replace <path to SSH key> with the path to your SSH key. You must provide a value for IdentityFile to get full working access to your RHOSP environment.
  3. If you use an inventory file to connect to your cloud, generate the ssh.config file from your Ansible inventory, for example, tripleo-ansible-inventory.yaml file:

    $ os-diff configure -i tripleo-ansible-inventory.yaml -o ssh.config --yaml

Verification

  • Test your connection:

    $ ssh -F ssh.config standalone

When you use the oc patch command to modify a resource, the changes are applied directly to the live object in your OpenShift cluster. If you later edit the custom resource (CR) file for the resource and apply the updates by using oc apply -f <filename>, your previous patched changes are overwritten and lost from the resource.

To prevent loss of configuration, you can use the --patch-file option so that your changes are kept in patch files that you can reapply; an example appears at the end of this section. Alternatively, you can export your OpenStackControlPlane CR after the patch is applied:

$ oc get <resource_type> <resource_name> -o yaml > <filename>.yaml

For example:

$ oc get OpenStackControlPlane openstack-control-plane -o yaml > openstack_control_plane.yaml
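
For example, if you keep a merge patch for the control plane in a file, you might apply and retain it as follows. The patch file name is illustrative:

$ oc patch openstackcontrolplane openstack-control-plane -n openstack \
    --type=merge --patch-file=control-plane-updates.yaml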