Chapter 1. Creating a RHOSO environment with distributed zones


You can deploy the Red Hat OpenStack Services on OpenShift (RHOSO) environment across distributed zones. Distributed zones are failure domains that are located in distributed low-latency L3-connected racks, rows, rooms, and data centers. You can deploy the RHOSO control plane across multiple Red Hat OpenShift Container Platform (RHOCP) cluster nodes that are located in the distributed zones, and you can deploy the RHOSO data plane across the same distributed zones.

RHOSO distributed zones architecture

A RHOSO environment with distributed zones is built on a routed spine-leaf network topology. The topology of a distributed control plane environment includes three RHOCP zones. Each zone has at least one worker node that hosts the control plane services and one Compute node.

To create a RHOSO environment with distributed zones, you must complete the following tasks:

  1. Install OpenStack Operator (openstack-operator) on an operational RHOCP cluster.
  2. Provide secure access to the RHOSO services.
  3. Create and configure the control plane network for dynamic routing with Border Gateway Protocol (BGP).
  4. Create and configure the data plane networks for dynamic routing with BGP.
  5. Create the distributed control plane for your environment.
  6. Create and configure the distributed data plane nodes.

You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.

Note

You cannot use the provisioning network in a routed spine-leaf network environment. You must configure provisioning to use the RHOCP machine network. The machine network is the network that the RHOCP cluster nodes use to communicate with each other, and it is also the subnet that includes the API and Ingress VIPs. You configure the machine network by specifying the IP address blocks for the nodes that form the cluster in the machineNetwork field of the RHOCP install-config.yaml file. For more details about the RHOCP machine network, see the RHOCP installation documentation.
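
For example, the machineNetwork stanza in install-config.yaml might look like the following sketch. The CIDR value is illustrative; use the IP address block of your own cluster nodes:

    networking:
      machineNetwork:
        - cidr: 192.168.111.0/24  # illustrative block; it must contain the node IPs and the API and Ingress VIPs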

To plan and prepare to deploy a distributed zone environment, you must understand the requirements and limitations for Red Hat OpenShift Container Platform (RHOCP) clusters that span multiple sites. For more information, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions).

1.1.1. RHOCP requirements

Your RHOCP cluster must comply with the minimum RHOCP hardware, network, software, and storage requirements that are detailed in Planning your deployment. In addition, to host a distributed zone environment, your RHOCP cluster must comply with the following requirements:

  • The RHOCP cluster must not be a compact cluster.
  • Each zone requires a low-latency interconnect:

    • Etcd for RHOCP requires a Round Trip Time (RTT) of less than 15 ms.
  • The network equipment must support BGP and be compatible with FRRouting (FRR).
  • The MetalLB Operator is configured to integrate with FRR-K8s, as shown in the first sketch after this list. For more information, see Configuring the integration of MetalLB and FRR-K8s.
  • The following Operators are installed on the RHOCP cluster:

    • The Self Node Remediation (SNR) Operator. For information, see Using Self Node Remediation in the Workload Availability for Red Hat OpenShift Remediation, fencing, and maintenance guide.
    • The Node Health Check Operator. For information, see Remediating Nodes with Node Health Checks in the Workload Availability for Red Hat OpenShift Remediation, fencing, and maintenance guide. A NodeHealthCheck resource that pairs the two Operators is sketched after this list.
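
The MetalLB integration that is referenced in this list is enabled through the MetalLB custom resource. The following is a minimal sketch, assuming the default metallb-system namespace:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      bgpBackend: frr-k8s  # delegate the BGP sessions to FRR-K8s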

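The Node Health Check Operator pairs with the SNR Operator through a remediation template. The following NodeHealthCheck sketch is illustrative only; the resource name is hypothetical, and the template name and namespace depend on how you installed the Operators:

    apiVersion: remediation.medik8s.io/v1alpha1
    kind: NodeHealthCheck
    metadata:
      name: nhc-workers  # hypothetical name
    spec:
      minHealthy: 51%  # do not remediate if fewer than 51% of the selected nodes are healthy
      selector:
        matchExpressions:
          - key: node-role.kubernetes.io/worker
            operator: Exists
      unhealthyConditions:
        - type: Ready
          status: "False"
          duration: 300s
        - type: Ready
          status: Unknown
          duration: 300s
      remediationTemplate:
        apiVersion: self-node-remediation.medik8s.io/v1alpha1
        kind: SelfNodeRemediationTemplate
        name: self-node-remediation-automatic-strategy-template
        namespace: openshift-workload-availability  # adjust to your installation namespace
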
1.1.2. Storage requirements

  • The RHOCP storage class is defined, and has access to persistent volumes with the ReadWriteOnce access mode (see the claim sketch after the Important admonition below).

    Note

    If you use Logical Volume Manager Storage (LVMS), the attached volume is not mounted on a new node in the event of a node failure. LVMS provides only local volumes, and the volume remains assigned to the failed node. This prevents the SNR Operator from automatically rescheduling pods that use LVMS persistent volume claims (PVCs). Therefore, if you use LVMS, you must detach volumes after a non-graceful node shutdown. For more information, see Detach volumes after non-graceful node shutdown.

  • For Red Hat Ceph Storage, a redundant Red Hat Ceph Storage cluster is available in each zone.
  • For third-party storage, local and remote storage array access is configured.

Important

This configuration is a Technology Preview when you use the following storage protocols with the Block Storage service, and is therefore not fully supported by Red Hat. Use it only for testing, and do not deploy it in a production environment:

  • Fibre Channel
  • NFS

For more information, see Technology Preview.
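
To illustrate the storage class requirement at the start of this section, the following is a minimal PersistentVolumeClaim sketch. The claim name and the local-storage class name are placeholders for your own values:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-rwo-claim  # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce  # the access mode that your storage class must provide
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage  # placeholder; substitute your RHOCP storage class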

Local access storage configuration

Storage services are co-located with their storage arrays in the same availability zone (AZ).

  • Each AZ contains its own storage array and dedicated storage network.
  • Service pods, such as cinder-volume or manila-share, are deployed on worker nodes within the same AZ as their target storage array.
  • Compute nodes must be on the same storage network to access local storage resources.

    AZ1 setup example

    • Storage array: 10.1.0.6.
    • Storage network: 10.1.0.0/24.
    • The manila-share pod is deployed on the AZ1 worker node with access to 10.1.0.0/24.
    • Compute nodes are connected to the same network for direct array access.
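
One way to keep service pods on workers in their own zone is a per-back-end node selector in the OpenStackControlPlane resource. The following excerpt is a minimal sketch for the AZ1 example; the back-end name az1 and the topology.kubernetes.io/zone label are assumptions for illustration, and the driver options for the back end are omitted:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
    spec:
      cinder:
        template:
          cinderVolumes:
            az1:  # hypothetical back-end name
              nodeSelector:
                topology.kubernetes.io/zone: az1  # assumed zone label on the AZ1 worker nodes
              customServiceConfig: |
                [az1]
                backend_availability_zone = AZ1
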
Remote access storage configuration

Storage arrays in each zone must be accessible from worker nodes in other zones to enable cross-AZ operations, such as image management or volume operations.

Network implementation requirements

  • iSCSI: Configure IP routing between AZ storage networks to enable remote access.
  • Fibre Channel: Configure FC switch zoning to allow cross-AZ access and maintain the same local and remote access patterns as iSCSI.

Use cases

  • Image management example: The Image service (glance) pod in AZ1 requires access to cinder-volume services across AZs so that you can upload images to the local glance store in AZ1 and copy the images to remote glance stores in AZ2 or AZ3 (see the store sketch after this list).

    Note

    When you use the Block Storage service as a back end for the Image service, volume creation from images can be optimized within each zone’s storage pool. The system uses back-end-assisted cloning instead of downloading image data, which significantly improves performance for boot-from-volume instances and volume creation. This optimization works when the image volume and destination volume are in the same storage pool. For cross-zone operations where volumes are created in different pools, the system uses the traditional download method. For more information, see Volume-from-image optimization with Block Storage back ends.

  • Volume operations example: Retype volumes between different AZs.
  • Cross-AZ share access example: Grant Compute service (nova) instances in AZ2 access to a Shared File Systems service (manila) share hosted in AZ1. Because network latency between AZs might impact storage performance, administrator policy determines whether to restrict access to the local AZ only for better performance or allow remote AZ access for greater flexibility.
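
The Image service layout in the image management example can be expressed as one glance store per zone. The following customServiceConfig excerpt is a minimal sketch; the per-zone volume types az1-type and az2-type are hypothetical and must map to each zone's storage pool:

    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = az1:cinder,az2:cinder
            [glance_store]
            default_backend = az1
            [az1]
            # hypothetical volume type backed by the AZ1 storage array
            cinder_volume_type = az1-type
            [az2]
            # hypothetical volume type backed by the AZ2 storage array
            cinder_volume_type = az2-type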