Chapter 1. Red Hat OpenStack Services on OpenShift 18.0 adoption overview
Adoption is the process of migrating a Red Hat OpenStack Platform (RHOSP) 17.1 control plane to Red Hat OpenStack Services on OpenShift 18.0, and then completing an in-place upgrade of the data plane. You can retain existing infrastructure investments and modernize your RHOSP deployment on a containerized Red Hat OpenShift Container Platform (RHOCP) foundation. To ensure that you understand the entire adoption process and how to sufficiently prepare your RHOSP environment, review the prerequisites, adoption process, and post-adoption tasks.
Read the whole adoption guide before you start the adoption to ensure that you understand the procedure. Prepare the necessary configuration snippets for each RHOSP service in advance, and test the migration in a representative test environment before you apply it to production.
1.1. Adoption limitations
Before you proceed with the adoption, check which features are Technology Previews or unsupported.
- Technology Preview
The following features are Technology Previews and have not been tested within the context of the Red Hat OpenStack Services on OpenShift (RHOSO) adoption:
- Key Manager service (barbican) adoption with Proteccio hardware security module (HSM) integration
- DNS-as-a-service (designate)
The following Compute service (nova) features are Technology Previews:
- NUMA-aware vswitches
- PCI passthrough by flavor
- SR-IOV trusted virtual functions
- vGPU
- Emulated virtual Trusted Platform Module (vTPM)
- UEFI
- AMD SEV
- Direct download from Rados Block Device (RBD)
- File-backed memory
- Defining a custom inventory of resources in a YAML file, provider.yaml
- Unsupported features
The adoption process does not support the following features:
- Adopting Border Gateway Protocol (BGP) environments to the RHOSO data plane
- Adopting a Federal Information Processing Standards (FIPS) environment
1.2. Adoption prerequisites
Before you begin the adoption procedure, complete the following prerequisites:
- Planning information
- Review the Adoption limitations.
- Review the Red Hat OpenShift Container Platform (RHOCP) requirements, data plane node requirements, Compute node requirements, and so on. For more information, see Planning your deployment.
- Review the adoption-specific networking requirements. For more information, see Configuring the network for the RHOSO deployment.
- Review the adoption-specific storage requirements. For more information, see Storage requirements.
- Review how to customize your deployed control plane with the services that are required for your environment. For more information, see Customizing the Red Hat OpenStack Services on OpenShift deployment.
Familiarize yourself with the following RHOCP concepts that are used during adoption:
- Familiarize yourself with mapping RHOSO versions to OpenStack Operators and OpenStackVersion custom resources (CRs). For more information, see the Red Hat Knowledgebase article How RHOSO versions map to OpenStack Operators and OpenStackVersion CRs.
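For example, after the OpenStack Operators are installed, you can inspect the deployed OpenStackVersion CR to confirm the version mapping. This is an illustrative command that assumes the control plane runs in the openstack project:
$ oc get openstackversion -n openstack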
- Back-up information
Back up your Red Hat OpenStack Platform (RHOSP) 17.1 environment by using one of the following options:
- The Relax-and-Recover tool. For more information, see Backing up the undercloud and the control plane nodes by using the Relax-and-Recover tool in Backing up and restoring the undercloud and control plane nodes.
- The Snapshot and Revert tool. For more information, see Backing up your Red Hat OpenStack Platform cluster by using the Snapshot and Revert tool in Backing up and restoring the undercloud and control plane nodes.
- A third-party backup and recovery tool. For more information about certified backup and recovery tools, see the Red Hat Ecosystem Catalog.
- Back up the configuration files from the RHOSP services and director on your file system. For more information, see Pulling the configuration from a director deployment.
- Compute
- Upgrade your Compute nodes to Red Hat Enterprise Linux 9.2. For more information, see Upgrading all Compute nodes to RHEL 9.2 in Framework for upgrades (16.2 to 17.1).
- On your Compute hosts, the systemd-container package must be installed and the systemd-machined service must be running. For more information about how to verify that the package is installed and that the service is running, see Installing the systemd-container package on Compute hosts.
- ML2/OVS
- If you use the Modular Layer 2 plug-in with Open vSwitch mechanism driver (ML2/OVS), migrate it to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver. For more information, see Migrating to the OVN mechanism driver.
- Tools
- The oc and podman command line tools are installed on your workstation.
Make sure to set the correct RHOSO project namespace in which to run commands.
$ oc project openstack
- RHOSP 17.1 release
- The RHOSP 17.1 cloud is updated to the 17.1.4 release or later. For more information, see Performing a minor update of Red Hat OpenStack Platform.
- RHOSP 17.1 hosts
- All control plane and data plane hosts of the RHOSP 17.1 cloud are up and running, and continue to run throughout the adoption procedure.
1.3. Guidelines for planning the adoption
When planning to adopt a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 environment, consider the scope of the change. An adoption is similar in scope to a data center upgrade. Different firmware levels, hardware vendors, hardware profiles, networking interfaces, storage interfaces, and so on affect the adoption process and can cause changes in behavior during the adoption.
Review the following guidelines to adequately plan for the adoption and increase the chance that you complete the adoption successfully:
All commands in the adoption documentation are examples. Do not copy and paste the commands without understanding what the commands do.
- To minimize the risk of an adoption failure, reduce the number of environmental differences between the staging environment and the production sites.
- If the staging environment is not representative of the production sites or if a staging environment is not available, you must plan to include contingency time in case the adoption fails.
Review your custom Red Hat OpenStack Platform (RHOSP) service configuration at every major release.
- Every major release upgrades through multiple OpenStack releases.
- Each major release might deprecate configuration options or change the format of the configuration.
- Prepare a Method of Procedure (MOP) that is specific to your environment to reduce the risk of variance or omitted steps when running the adoption process.
You can use representative hardware in a staging environment to prepare a MOP and validate any content changes.
- Include a cross-section of firmware versions, additional interface or device hardware, and any additional software in the representative staging environment to ensure that it is broadly representative of the variety that is present in the production environments.
- Ensure that you validate any Red Hat Enterprise Linux update or upgrade in the representative staging environment.
- Use Satellite for localized and version-pinned RPM content where your data plane nodes are located.
- In the production environment, use the content that you tested in the staging environment.
1.4. Adoption process overview
Familiarize yourself with the steps of the adoption process.
- Main adoption process
- Migrate TLS everywhere (TLS-e) to the Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
- Migrate your existing databases to the new control plane.
- Adopt your Red Hat OpenStack Platform 17.1 control plane services to the new RHOSO 18.0 deployment.
- Adopt the RHOSO 18.0 data plane.
- Migrate the Object Storage service (swift) to the RHOSO nodes.
- Distributed Compute Node (DCN) architecture process
- Overview of Distributed Compute Node adoption
- Configuring spine-leaf networks for the Red Hat OpenStack Services on OpenShift deployment
- Configuring control plane networking for spine-leaf topologies
- Configuring data plane node sets for DCN sites
If you use a DCN architecture with storage, the following additional steps apply, depending on the services that are included in your deployment:
- Adopting the Image service with multiple Red Hat Ceph Storage back ends (DCN)
- Adopting the Block Storage service with multiple Red Hat Ceph Storage back ends (DCN)
- Adopting Compute services with multiple Red Hat Ceph Storage back ends (DCN)
- Red Hat Ceph Storage migration for Distributed Compute Node deployments
- Post-adoption tasks
- For more details on the tasks you must perform after completing the adoption, see Post-adoption tasks.
1.5. Adoption duration and impact
The durations in the following table were recorded in a test environment that consisted of 228 Compute nodes and 3 Networker nodes. To accurately estimate the adoption duration for each task, perform these procedures in a test environment with hardware that is similar to your production environment. Ensure that you set up the Red Hat OpenShift Container Platform (RHOCP) environment and install the Operators before testing.
Durations can vary significantly based on the content of your environment, for example, the size of your service databases or the number of services. The durations represent raw execution time. They do not include human operator activity.
| Adoption stage | Duration | Notes |
|---|---|---|
The following table summarizes the data plane connectivity impact of different OVN gateway migration scenarios:
| Scenario | Notes |
|---|---|
| Migrate a 17.1 OVN gateway on the control plane to a RHOCP-hosted OVN gateway | Possible L3 downtime due to the migration of the traffic path to new hosts |
| Migrate a 17.1 OVN gateway on the control plane to an 18.0 data plane Networker node | No L2/L3 data plane connectivity loss because the traffic path remains unchanged |
| Migrate a 17.1 OVN gateway on a Networker node to an 18.0 data plane Networker node | No L2/L3 data plane connectivity loss because the traffic path remains unchanged |
| L3 handled through provider networks | No L2/L3 data plane connectivity loss because the traffic path remains unchanged |
1.6. Overview of Distributed Compute Node adoption
The process to adopt a Distributed Compute Node (DCN) deployment from Red Hat OpenStack Platform (RHOSP) to Red Hat OpenStack Services on OpenShift (RHOSO) requires the following additional adoption tasks:
- You must map a multi-stack deployment to multiple node sets.
- You must map additional networking configurations.
- Multi-stack to multi-node set mapping
In director deployments, DCN environments use multiple Heat stacks:
- The central stack provides the templates for the Controller nodes and the central Compute nodes.
- An edge stack provides the templates for the edge Compute nodes at a site. There is one stack per DCN site.
When you perform an adoption, map the director stacks to OpenStackDataPlaneNodeSet custom resources (CRs):

Table 1.3. Mapping director stacks to RHOSO node sets

| Director stack | RHOSO node set | Availability zone |
|---|---|---|
| Central stack (Compute role) | openstack-edpm or openstack-cell1 | az-central |
| DCN1 stack (ComputeDcn1 role) | openstack-edpm-dcn1 or openstack-cell1-dcn1 | az-dcn1 |
| DCN2 stack (ComputeDcn2 role) | openstack-edpm-dcn2 or openstack-cell1-dcn2 | az-dcn2 |

Note: Keep all node sets in the same Nova cell to maintain unified scheduling through a shared cell. The default cell is cell1.
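For example, a node set for the DCN1 site can reference the site-specific subnets that you later define in the NetConfig CR. The following is a minimal, illustrative sketch; the node set name, node names, and subnet names are examples and must match your environment:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-dcn1
  namespace: openstack
spec:
  nodes:
    edpm-compute-dcn1-0:            # hypothetical node entry
      hostName: dcn1-compute-0
      networks:
      - name: ctlplane
        subnetName: ctlplanedcn1    # DCN1 subnet defined in the NetConfig CR
        defaultRoute: true
      - name: internalapi
        subnetName: internalapidcn1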
- Key differences from standard adoption
The following table summarizes the differences between standard adoption and DCN adoption:

Table 1.4. Comparison of standard and DCN adoption

| Aspect | Standard adoption | DCN adoption |
|---|---|---|
| Director stacks | Single stack | Multiple stacks (central + edge sites) |
| Network topology | Flat L2 networks | Routed L3 networks with multiple subnets |
| Data plane node sets | Single node set | Multiple node sets (one per site minimum) |
| Network routes | Usually not required | Required for inter-site connectivity |
| Physnets | Single physnet (for example, datacentre) | Multiple physnets (for example, leaf0, leaf1, leaf2) |
| Availability zones | Often single AZ | Multiple AZs (one per site) |
| OVN bridge mappings | Single mapping | Site-specific mappings |
| Provider networks | Single segment | Multi-segment routed provider networks |
- Requirements for DCN adoption
Before adopting a DCN deployment, ensure you have:
- Network topology information for all sites (IP ranges, VLANs, gateways)
- Inter-site routing configuration (routes between site subnets)
- Mapping of director roles to availability zones
- OVN bridge mapping configuration for each site
The adoption of the control plane must complete before adopting any data plane nodes. However, once the control plane is adopted, the edge site data plane adoptions can proceed in parallel with the central site data plane adoption.
- DCN adoption workflow overview
The adoption of a Distributed Compute Node (DCN) deployment from Red Hat OpenStack Platform (RHOSP) to Red Hat OpenStack Services on OpenShift (RHOSO) consists of the following stages:
- Control plane adoption: Adopt all control plane services from the central director stack to the RHOSO control plane. This is identical to standard adoption.
- Network configuration: Configure multi-subnet NetConfig and NetworkAttachmentDefinition CRs to support all site networks.
- Data plane node set creation: Create separate OpenStackDataPlaneNodeSet CRs for each site, each with site-specific network configurations:
- Network subnet references
- OVN bridge mappings (physnets)
- Inter-site routing configuration
- Data plane deployment: Deploy all node sets. The edge site node sets can be deployed in parallel after the central site control plane is adopted.
1.7. Installing the systemd-container package on Compute hosts
Before you adopt the Red Hat OpenStack Services on OpenShift (RHOSO) data plane, you must verify that the systemd-container package is installed and that systemd-machined is running on all the Compute hosts. You must install the systemd-container package on each Compute host that does not have this package.
Procedure
- Log in to the Compute node host as a user with the appropriate permissions.
- List the instances that are running on the host:

$ sudo machinectl list

Sample output:

MACHINE                  CLASS SERVICE      OS VERSION ADDRESSES
qemu-1-instance-000000b9 vm    libvirt-qemu -  -       -
qemu-2-instance-000000c2 vm    libvirt-qemu -  -       -

2 machines listed.

- Verify that the systemd-machined service is running:

$ sudo systemctl status systemd-machined.service

Sample output:

systemd-machined.service - Virtual Machine and Container Registration Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-machined.service; static)
   Active: active (running) since Mon 2025-06-16 11:42:07 EDT; 2min 48s ago
     Docs: man:systemd-machined.service(8)
           man:org.freedesktop.machine1(5)
 Main PID: 136614 (systemd-machine)
   Status: "Processing requests..."
    Tasks: 1 (limit: 838860)
   Memory: 1.4M
      CPU: 33ms
   CGroup: /system.slice/systemd-machined.service
           └─136614 /usr/lib/systemd/systemd-machined

Jun 16 11:42:07 computehost001 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jun 16 11:42:07 computehost001 systemd[1]: Started Virtual Machine and Container Registration Service.
Jun 16 11:43:44 computehost001 systemd-machined[136614]: New machine qemu-1-instance-000000b9.
Jun 16 11:43:51 computehost001 systemd-machined[136614]: New machine qemu-2-instance-000000c2.

Important: If the systemd-machined service is running, skip the rest of this procedure. Ensure that you verify that the systemd-machined service is running on each Compute node host in the cluster.

- If the systemd-machined service is not running, live migrate all virtual machines from the host before you install the systemd-container package. For more information about live migration, see Rebooting Compute nodes in Performing a minor update of Red Hat OpenStack Platform.
- Install the systemd-container package on the host:
- If you upgraded your environment from an earlier version of Red Hat OpenStack Platform, reboot the Compute host to automatically install the systemd-container package.
- If you deployed a new RHOSO environment, install the systemd-container package manually by using the following command. Rebooting the Compute host is not required:

$ sudo dnf -y install systemd-container

Note: If your Compute host is not running a virtual machine, you can install the systemd-container package automatically or manually.
- Repeat this procedure on each Compute host in the cluster where the systemd-machined service is not running.
1.8. Identity service authentication
If you have custom policies enabled, complete the following steps for adoption:
- Remove custom policies.
- Run the adoption.
- Re-add custom policies by using the new SRBAC syntax.
Red Hat does not support customized roles or policies. Syntax errors or misapplied authorization can negatively impact security or usability. If you need customized roles or policies in your production environment, contact Red Hat support for a support exception before you begin the adoption.
After you adopt a director-based OpenStack deployment to a Red Hat OpenStack Services on OpenShift deployment, the Identity service performs user authentication and authorization by using Secure RBAC (SRBAC). If SRBAC is already enabled, then there is no change to how you perform operations. If SRBAC is disabled, then adopting a director-based OpenStack deployment might change how you perform operations due to changes in API access policies.
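For example, you can check whether SRBAC is already enabled in your director-based deployment by inspecting the oslo.policy options in the Identity service configuration on a Controller node. The option names are standard oslo.policy settings; the file path is an example for a director-deployed environment:

$ sudo grep -E 'enforce_new_defaults|enforce_scope' \
    /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf

If both options are set to true, SRBAC is enabled and API access policies do not change after adoption.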
1.9. Configuring the network for the Red Hat OpenStack Services on OpenShift deployment
When you adopt a new Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must align the network configuration with the adopted cluster to maintain connectivity for existing workloads.
Perform the following tasks to incorporate the existing network configuration:
- Configure Red Hat OpenShift Container Platform (RHOCP) worker nodes to align VLAN tags and IP Address Management (IPAM) configuration with the existing deployment.
- Configure control plane services to use compatible IP ranges for service and load-balancing IP addresses.
- Configure data plane nodes to use corresponding compatible configuration for VLAN tags and IPAM.
When configuring nodes and services, the general approach is as follows:
- For IPAM, you can either reuse subnet ranges from the existing deployment or, if there is a shortage of free IP addresses in existing subnets, define new ranges for the new control plane services. If you define new ranges, you configure IP routing between the old and new ranges.
- For VLAN tags, always reuse the configuration from the existing deployment.
1.9.1. Retrieving the network configuration from your existing deployment
You must determine which isolated networks are defined in your existing deployment. After you retrieve your network configuration, you have the following information:
- A list of isolated networks that are used in the existing deployment.
- For each of the isolated networks, the VLAN tag and IP ranges used for dynamic address allocation.
- A list of existing IP address allocations that are used in the environment. When reusing the existing subnet ranges to host the new control plane services, these addresses are excluded from the corresponding allocation pools.
Procedure
Find the network configuration in the network_data.yaml file. For example:

- name: InternalApi
  mtu: 1500
  vip: true
  vlan: 20
  name_lower: internal_api
  dns_domain: internal.mydomain.tld.
  service_net_map_replace: internal
  subnets:
    internal_api_subnet:
      ip_subnet: '172.17.0.0/24'
      allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]

- Retrieve the VLAN tag that is used in the vlan key and the IP range in the ip_subnet key for each isolated network from the network_data.yaml file. When reusing subnet ranges from the existing deployment for the new control plane services, the ranges are split into separate pools for control plane services and load-balancer IP addresses.
- Use the tripleo-ansible-inventory.yaml file to determine the list of IP addresses that are already consumed in the adopted environment. For each listed host in the file, make a note of the IP and VIP addresses that are consumed by the node. For example:

Standalone:
  hosts:
    standalone:
      ...
      internal_api_ip: 172.17.0.100
      ...
...
standalone:
  children:
    Standalone: {}
  vars:
    ...
    internal_api_vip: 172.17.0.2
    ...

Note: In this example, the 172.17.0.2 and 172.17.0.100 values are consumed and are not available for the new control plane services until the adoption is complete.
- Repeat this procedure for each isolated network and each host in the configuration.
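To quickly collect the consumed addresses, you can search the inventory for IP and VIP entries. This is an illustrative command; adjust the path to the location of the tripleo-ansible-inventory.yaml file on your undercloud:

$ grep -E '_(ip|vip): ' tripleo-ansible-inventory.yaml | sort -u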
1.9.2. Planning your IPAM configuration
In a Red Hat OpenStack Services on OpenShift (RHOSO) deployment, each service that is deployed on the Red Hat OpenShift Container Platform (RHOCP) worker nodes requires an IP address from the IP Address Management (IPAM) pool. In a Red Hat OpenStack Platform (RHOSP) deployment, all services that are hosted on a Controller node share the same IP address.
The RHOSO control plane has different requirements for the number of IP addresses that are made available for services. Depending on the size of the IP ranges that are used in the existing RHOSP deployment, you might reuse these ranges for the RHOSO control plane.
The total number of IP addresses that are required for the new control plane services in each isolated network is calculated as the sum of the following:
- The number of RHOCP worker nodes. Each worker node requires 1 IP address in the NodeNetworkConfigurationPolicy custom resource (CR).
- The number of IP addresses required for the data plane nodes. Each node requires an IP address from the NetConfig CRs.
- The number of IP addresses required for control plane services. Each service requires an IP address from the NetworkAttachmentDefinition CRs. This number depends on the number of replicas for each service.
- The number of IP addresses required for load balancer IP addresses. Each service requires a Virtual IP address from the IPAddressPool CRs.
For example, a simple single worker node RHOCP deployment with Red Hat OpenShift Local has the following IP ranges defined for the internalapi network:
- 1 IP address for the single worker node
- 1 IP address for the data plane node
- NetworkAttachmentDefinition CRs for control plane services: X.X.X.30-X.X.X.70 (41 addresses)
- IPAddressPool CRs for load balancer IPs: X.X.X.80-X.X.X.90 (11 addresses)
This example shows a total of 54 IP addresses allocated to the internalapi allocation pools.
The requirements might differ depending on the list of RHOSP services to be deployed, their replica numbers, and the number of RHOCP worker nodes and data plane nodes.
Additional IP addresses might be required in future RHOSP releases, so you must plan for some extra capacity for each of the allocation pools that are used in the new environment.
After you determine the required IP pool size for the new deployment, you can choose to define new IP address ranges or reuse your existing IP address ranges. Regardless of the scenario, the VLAN tags in the existing deployment are reused in the new deployment. Ensure that the VLAN tags are properly retained in the new configuration.
1.9.2.1. Configuring new subnet ranges
If you are using IPv6, you can reuse existing subnet ranges in most cases. For more information about existing subnet ranges, see Reusing existing subnet ranges.
You can define new IP ranges for control plane services that belong to a different subnet that is not used in the existing cluster. Then you configure link local IP routing between the existing and new subnets to enable existing and new service deployments to communicate. This involves using the director mechanism on a pre-adopted cluster to configure additional link local routes. This enables the data plane deployment to reach out to Red Hat OpenStack Platform (RHOSP) nodes by using the existing subnet addresses. You can use new subnet ranges with any existing subnet configuration, and when the existing cluster subnet ranges do not have enough free IP addresses for the new control plane services.
You must size the new subnet appropriately to accommodate the new control plane services. There are no specific requirements for the existing deployment allocation pools that are already consumed by the RHOSP environment.
Defining a new subnet for Storage and Storage management is not supported because Compute service (nova) and Red Hat Ceph Storage do not allow modifying those networks during adoption.
In the following procedure, you configure NetworkAttachmentDefinition custom resources (CRs) to use a different subnet from what is configured in the network_config section of the OpenStackDataPlaneNodeSet CR for the same networks. The new range in the NetworkAttachmentDefinition CR is used for control plane services, while the existing range in the OpenStackDataPlaneNodeSet CR is used to manage IP Address Management (IPAM) for data plane nodes.
The values that are used in the following procedure are examples. Use values that are specific to your configuration.
Procedure
Configure link local routes on the existing deployment nodes for the control plane subnets. This is done through director configuration:

network_config:
- type: ovs_bridge
  name: br-ctlplane
  routes:
  - ip_netmask: 0.0.0.0/0
    next_hop: 192.168.1.1
  - ip_netmask: 172.31.0.0/24
    next_hop: 192.168.1.100

- ip_netmask defines the new control plane subnet.
- next_hop defines the control plane IP address of the existing data plane node.

Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment.
- Apply the new configuration to every RHOSP node:

(undercloud)$ openstack overcloud network provision \
  --output <deployment_file> \
  [--templates <templates_directory>] \
  /home/stack/templates/<networks_definition_file>

(undercloud)$ openstack overcloud node provision \
  --stack <stack> \
  --network-config \
  --output <deployment_file> \
  [--templates <templates_directory>] \
  /home/stack/templates/<node_definition_file>

- Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
- Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
- Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. The cli-overcloud-node-network-config.yaml playbook uses the os-net-config tool to apply the network configuration on the deployed nodes. If you do not use --network-config to provide the network definitions, then you must configure the {{role.name}}NetworkConfigTemplate parameters in your network-environment.yaml file, otherwise the default network definitions are used.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
- Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml. Ensure that the network_config_update variable is set to true in the node definition file.

Note: Network configuration changes are not applied by default to avoid the risk of network disruption. You must enforce the changes by setting StandaloneNetworkConfigUpdate: true in the director configuration files.
- Confirm that there are new link local routes to the new subnet on each node. For example:

# ip route | grep 172
172.31.0.0/24 via 192.168.122.100 dev br-ctlplane

- You also must configure link local routes to the existing deployment on Red Hat OpenStack Services on OpenShift (RHOSO) worker nodes. This is achieved by adding routes entries to the NodeNetworkConfigurationPolicy CRs for each network. For example:

- destination: 192.168.122.0/24
  next-hop-interface: ospbr

- destination defines the original subnet of the isolated network on the data plane.
- next-hop-interface defines the Red Hat OpenShift Container Platform (RHOCP) worker network interface that corresponds to the isolated network on the data plane.

As a result, the following route is added to your RHOCP nodes:

# ip route | grep 192
192.168.122.0/24 dev ospbr proto static scope link

- Later, during the data plane adoption, in the network_config section of the OpenStackDataPlaneNodeSet CR, add the same link local routes for the new control plane subnet ranges. For example:

nodeTemplate:
  ansible:
    ansibleUser: root
    ansibleVars:
      additional_ctlplane_host_routes:
      - ip_netmask: 172.31.0.0/24
        next_hop: '{{ ctlplane_ip }}'
      edpm_network_config_template: |
        network_config:
        - type: ovs_bridge
          routes: {{ ctlplane_host_routes + additional_ctlplane_host_routes }}
  ...

- List the IP addresses that are used for the data plane nodes in the existing deployment as ansibleHost and fixedIP. For example:

nodes:
  standalone:
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: ""
    hostName: standalone
    networks:
    - defaultRoute: true
      fixedIP: 192.168.122.100
      name: ctlplane
      subnetName: subnet1

Important: Do not change RHOSP node IP addresses during the adoption process. List previously used IP addresses in the fixedIP fields for each node entry in the nodes section of the OpenStackDataPlaneNodeSet CR.
- Expand the SSH range for the firewall configuration to include both subnets to allow SSH access to data plane nodes from both subnets:

edpm_sshd_allowed_ranges:
- 192.168.122.0/24
- 172.31.0.0/24

This provides SSH access from the new subnet to the RHOSP nodes as well as the RHOSP subnets.
1.9.2.2. Reusing existing subnet ranges
You can reuse existing subnet ranges if they have enough IP addresses to allocate to the new control plane services. You configure the new control plane services to use the same subnet as you used in the Red Hat OpenStack Platform (RHOSP) environment, and configure the allocation pools that are used by the new services to exclude IP addresses that are already allocated to existing cluster nodes. By reusing existing subnets, you avoid additional link local route configuration between the existing and new subnets.
If your existing subnets do not have enough IP addresses in the existing subnet ranges for the new control plane services, you must create new subnet ranges.
No special routing configuration is required to reuse subnet ranges. However, you must ensure that the IP addresses that are consumed by RHOSP services do not overlap with the new allocation pools configured for Red Hat OpenStack Services on OpenShift control plane services.
If you are especially constrained by the size of the existing subnet, you may have to apply elaborate exclusion rules when defining allocation pools for the new control plane services.
1.9.3. Configuring isolated networks
Before you begin replicating your existing VLAN and IPAM configuration in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must have the following IP address allocations for the new control plane services:
- 1 IP address for each isolated network on each Red Hat OpenShift Container Platform (RHOCP) worker node. You configure these IP addresses in the NodeNetworkConfigurationPolicy custom resources (CRs) for the RHOCP worker nodes.
- 1 IP range for each isolated network for the data plane nodes. You configure these ranges in the NetConfig CRs for the data plane nodes.
- 1 IP range for each isolated network for control plane services. These ranges enable pod connectivity for isolated networks in the NetworkAttachmentDefinition CRs.
- 1 IP range for each isolated network for load balancer IP addresses. These IP ranges define load balancer IP addresses for MetalLB in the IPAddressPool CRs.
The exact list and configuration of isolated networks in the following procedures should reflect the actual Red Hat OpenStack Platform environment. The number of isolated networks might differ from the examples used in the procedures. The IPAM scheme might also differ. Only the parts of the configuration that are relevant to configuring networks are shown. The values that are used in the following procedures are examples. Use values that are specific to your configuration.
1.9.3.1. Configuring isolated networks on RHOCP worker nodes
To connect service pods to isolated networks on Red Hat OpenShift Container Platform (RHOCP) worker nodes that run Red Hat OpenStack Platform services, physical network configuration on the hypervisor is required.
This configuration is managed by the NMState operator, which uses NodeNetworkConfigurationPolicy custom resources (CRs) to define the desired network configuration for the nodes.
Procedure
For each RHOCP worker node, define a
NodeNetworkConfigurationPolicyCR that describes the desired network configuration. For example:apiVersion: v1 items: - apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.20 state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.21 state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: enp6s0.22 state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true nodeSelector: kubernetes.io/hostname: ocp-worker-0 node-role.kubernetes.io/worker: ""NoteFor environments that are enabled with border gateway protocol (BGP), you might need to add additional routes in the
NodeNetworkConfigurationPolicyCR so that RHOCP worker nodes can reach the Red Hat OpenStack Platform Controller nodes and Compute nodes over the control plane and internal API networks.When you configure the RHOCP worker nodes network in the
NodeNetworkConfigurationPolicyCR, add routes for each of the following networks:-
External network (for example,
172.31.0.0/24) -
Control plane network (for example,
192.168.188.0/24) -
BGP main network (for example,
99.99.0.0/16)
The following example shows the
routes.configsection from aNodeNetworkConfigurationPolicyCR for a worker node with BGP configured. In this example,100.64.0.17and100.65.0.17are the IP addresses of the leaf switches that are connected to the specific RHOCP node:routes: config: - destination: 99.99.0.0/16 next-hop-address: 100.64.0.17 next-hop-interface: enp7s0 weight: 200 - destination: 99.99.0.0/16 next-hop-address: 100.65.0.17 next-hop-interface: enp8s0 weight: 200 - destination: 172.31.0.0/24 next-hop-address: 100.64.0.17 next-hop-interface: enp7s0 weight: 200 - destination: 172.31.0.0/24 next-hop-address: 100.65.0.17 next-hop-interface: enp8s0 weight: 200 - destination: 192.168.188.0/24 next-hop-address: 100.64.0.17 next-hop-interface: enp7s0 weight: 200 - destination: 192.168.188.0/24 next-hop-address: 100.65.0.17 next-hop-interface: enp8s0 weight: 200-
External network (for example,
1.9.3.2. Configuring isolated networks on control plane services
After the NMState operator creates the desired hypervisor network configuration for isolated networks, you must configure the Red Hat OpenStack Platform (RHOSP) services to use the configured interfaces. You define a NetworkAttachmentDefinition custom resource (CR) for each isolated network. In some clusters, these CRs are managed by the Cluster Network Operator, in which case you use Network CRs instead. For more information, see Cluster Network Operator in Networking.
Procedure
Define a
NetworkAttachmentDefinitionCR for each isolated network. For example:apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi spec: config: | { "cniVersion": "0.3.1", "name": "internalapi", "type": "macvlan", "master": "enp6s0.20", "ipam": { "type": "whereabouts", "range": "172.17.0.0/24", "range_start": "172.17.0.20", "range_end": "172.17.0.50" } }ImportantEnsure that the interface name and IPAM range match the configuration that you used in the
NodeNetworkConfigurationPolicyCRs.Optional: When reusing existing IP ranges, you can exclude part of the range that is used in the existing deployment by using the
excludeparameter in theNetworkAttachmentDefinitionpool. For example:apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi spec: config: | { "cniVersion": "0.3.1", "name": "internalapi", "type": "macvlan", "master": "enp6s0.20", "ipam": { "type": "whereabouts", "range": "172.17.0.0/24", "range_start": "172.17.0.20", "range_end": "172.17.0.50", "exclude": [ "172.17.0.24/32", "172.17.0.44/31" ] } }-
spec.config.ipam.range_startdefines the start of the IP range. -
spec.config.ipam.range_enddefines the end of the IP range. -
spec.config.ipam.excludeexcludes part of the IP range. This example excludes IP addresses172.17.0.24/32and172.17.0.44/31from the allocation pool.
-
If your RHOSP services require load balancer IP addresses, define the pools for these services in an
IPAddressPoolCR. For example:NoteThe load balancer IP addresses belong to the same IP range as the control plane services, and are managed by MetalLB. This pool should also be aligned with the RHOSP configuration.
- apiVersion: metallb.io/v1beta1 kind: IPAddressPool spec: addresses: - 172.17.0.60-172.17.0.70Define
IPAddressPoolCRs for each isolated network that requires load balancer IP addresses.Optional: When reusing existing IP ranges, you can exclude part of the range by listing multiple entries in the
addressessection of theIPAddressPool. For example:- apiVersion: metallb.io/v1beta1 kind: IPAddressPool spec: addresses: - 172.17.0.60-172.17.0.64 - 172.17.0.66-172.17.0.70The example above would exclude the
172.17.0.65address from the allocation pool.For environments that are enabled with border gateway protocol (BGP), add routes to the
NetworkAttachmentDefinitionCRs so that the pods can communicate with the Red Hat OpenStack Platform Controller nodes and Compute nodes over the isolated networks. This is similar to the routes that should be added to theNodeNetworkConfigurationPolicyCRs in BGP environments. For more information about isolated networks, see Configuring isolated networks on RHOCP worker nodes. The following example shows aNetworkAttachmentDefinitionCR for the storage network with routes:apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: storage namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "storage", "type": "bridge", "isDefaultGateway": false, "isGateway": true, "forceAddress": false, "hairpinMode": true, "ipMasq": false, "bridge": "storage", "ipam": { "type": "whereabouts", "range": "172.18.0.0/24", "range_start": "172.18.0.30", "range_end": "172.18.0.70", "routes": [ {"dst": "172.31.0.0/24", "gw": "172.18.0.1"}, {"dst": "192.168.188.0/24", "gw": "172.18.0.1"}, {"dst": "99.99.0.0/16", "gw": "172.18.0.1"} ] } }
1.9.3.3. Configuring isolated networks on data plane nodes
Data plane nodes are configured by the OpenStack Operator and your OpenStackDataPlaneNodeSet custom resources (CRs). The OpenStackDataPlaneNodeSet CRs define your desired network configuration for the nodes.
Your Red Hat OpenStack Services on OpenShift (RHOSO) network configuration should reflect the existing Red Hat OpenStack Platform (RHOSP) network setup. You must pull the network_data.yaml files from each RHOSP node and reuse them when you define the OpenStackDataPlaneNodeSet CRs. The format of the configuration does not change, so you can put network templates under edpm_network_config_template variables, either for all nodes or for each node.
Procedure
Configure a NetConfig CR with your desired VLAN tags and IPAM configuration. For example:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      cidr: 172.17.0.0/24
      vlan: 20
  - name: storage
    dnsDomain: storage.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.18.0.250
        start: 172.18.0.100
      cidr: 172.18.0.0/24
      vlan: 21
  - name: tenant
    dnsDomain: tenant.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.19.0.250
        start: 172.19.0.100
      cidr: 172.19.0.0/24
      vlan: 22

where:
- spec.networks specifies the networks composition. The networks composition must match the source cloud configuration to avoid data plane connectivity downtime.
- Optional: In the NetConfig CR, list multiple ranges for the allocationRanges field to exclude some of the IP addresses, for example, to accommodate IP addresses that are already consumed by the adopted environment:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.199
        start: 172.17.0.100
      - end: 172.17.0.250
        start: 172.17.0.201
      cidr: 172.17.0.0/24
      vlan: 20

This example excludes the 172.17.0.200 address from the pool.
1.10. Configuring spine-leaf networks for the Red Hat OpenStack Services on OpenShift deployment
When you adopt a Red Hat OpenStack Platform (RHOSP) deployment with spine-leaf networking, such as a Distributed Compute Node (DCN) architecture, you must configure each L2 network segment with a separate IP subnet and create routed provider networks. Traffic between sites is routed at L3 through spine routers or similar network infrastructure.
You must configure routing for Compute nodes at edge sites to connect with control plane services, such as RabbitMQ or the database at the central site. The cloud will not function correctly without routes configured.
DHCP relay is not supported in adopted Red Hat OpenStack Services on OpenShift (RHOSO) environments with spine-leaf topologies. This affects bare-metal provisioning scenarios that use PXE boot.
If you need to provision bare-metal nodes at edge sites, use Redfish virtual media or similar BMC virtual media features instead of PXE boot.
For example, a Compute node at the DCN1 site requires routes to the networks of the other sites:
| Destination network | Next hop | Purpose |
|---|---|---|
| 172.17.0.0/24 | 172.17.10.1 | Route to central internalapi |
| 172.17.20.0/24 | 172.17.10.1 | Route to DCN2 internalapi |
| 172.18.0.0/24 | 172.18.10.1 | Route to central storage |
| 172.18.20.0/24 | 172.18.10.1 | Route to DCN2 storage |
You configure these routes in the edpm_network_config_template within the OpenStackDataPlaneNodeSet custom resource (CR) for each site.
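The following snippet is a minimal, illustrative sketch of such routes for a DCN1 node set. It assumes a VLAN interface for the DCN1 internalapi network (VLAN 30 in the examples in this section) on a device named nic1; adjust the interface, addresses, and gateways to match your site:

nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_template: |
        network_config:
        - type: vlan
          vlan_id: 30
          device: nic1
          addresses:
          - ip_netmask: 172.17.10.100/24   # example node IP on the DCN1 internalapi subnet
          routes:
          - ip_netmask: 172.17.0.0/24      # central internalapi
            next_hop: 172.17.10.1
          - ip_netmask: 172.17.20.0/24     # DCN2 internalapi
            next_hop: 172.17.10.1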
The following example subnet plan for each site is used in the examples in this section:
| Network | Central site | DCN1 site | DCN2 site |
|---|---|---|---|
| Control plane | 192.168.122.0/24 | 192.168.133.0/24 | 192.168.144.0/24 |
| Internal API | 172.17.0.0/24 | 172.17.10.0/24 | 172.17.20.0/24 |
| Storage | 172.18.0.0/24 | 172.18.10.0/24 | 172.18.20.0/24 |
| Tenant | 172.19.0.0/24 | 172.19.10.0/24 | 172.19.20.0/24 |
When you adopt a spine-leaf deployment, you configure the NetConfig CR with multiple subnets for each service network. Each subnet represents a different site.
Example NetConfig with multiple subnets per network
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
name: netconfig
spec:
networks:
- name: ctlplane
dnsDomain: ctlplane.example.com
subnets:
- name: subnet1 # Central site
allocationRanges:
- end: 192.168.122.120
start: 192.168.122.100
cidr: 192.168.122.0/24
gateway: 192.168.122.1
- name: ctlplanedcn1 # DCN1 site
allocationRanges:
- end: 192.168.133.120
start: 192.168.133.100
cidr: 192.168.133.0/24
gateway: 192.168.133.1
- name: ctlplanedcn2 # DCN2 site
allocationRanges:
- end: 192.168.144.120
start: 192.168.144.100
cidr: 192.168.144.0/24
gateway: 192.168.144.1
- name: internalapi
dnsDomain: internalapi.example.com
subnets:
- name: subnet1 # Central site
allocationRanges:
- end: 172.17.0.250
start: 172.17.0.100
cidr: 172.17.0.0/24
vlan: 20
- name: internalapidcn1 # DCN1 site
allocationRanges:
- end: 172.17.10.250
start: 172.17.10.100
cidr: 172.17.10.0/24
vlan: 30
- name: internalapidcn2 # DCN2 site
allocationRanges:
- end: 172.17.20.250
start: 172.17.20.100
cidr: 172.17.20.0/24
vlan: 40
- Each network defines multiple subnets, one for each site.
- Each site uses unique VLAN IDs. In this example, central uses VLANs 20-23, DCN1 uses VLANs 30-33, and DCN2 uses VLANs 40-43.
- The subnet naming convention typically uses subnet1 for the central site and site-specific names like internalapidcn1 for edge sites.
Because the sites are geographically distributed, each site requires its own provider network (physnet). The Networking service (neutron) must be configured to recognize all physnets.
Example Neutron ML2 configuration for multiple physnets

[ml2_type_vlan]
network_vlan_ranges = leaf0:1:1000,leaf1:1:1000,leaf2:1:1000

[neutron]
physnets = leaf0,leaf1,leaf2

- leaf0 corresponds to the central site.
- leaf1 corresponds to the DCN1 site.
- leaf2 corresponds to the DCN2 site.

When you create routed provider networks in RHOSO, you create network segments that map to these physnets:
- Segment for central: physnet=leaf0, subnet=192.168.122.0/24
- Segment for DCN1: physnet=leaf1, subnet=192.168.133.0/24
- Segment for DCN2: physnet=leaf2, subnet=192.168.144.0/24
1.11. Storage requirements
Storage in a Red Hat OpenStack Platform (RHOSP) deployment refers to the following types:
- The storage that is needed for the service to run
- The storage that the service manages
Before you can deploy the services in Red Hat OpenStack Services on OpenShift (RHOSO), you must review the storage requirements, plan your Red Hat OpenShift Container Platform (RHOCP) node selection, prepare your RHOCP nodes, and so on.
1.11.1. Storage driver certification
Before you adopt your Red Hat OpenStack Platform 17.1 deployment to a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment, confirm that your deployed storage drivers are certified for use with RHOSO 18.0. For information on software certified for use with RHOSO 18.0, see the Red Hat Ecosystem Catalog.
1.11.2. Block Storage service guidelines
Prepare to adopt your Block Storage service (cinder):
- Take note of the Block Storage service back ends that you use.
- Determine all the transport protocols that the Block Storage service back ends use, such as RBD, iSCSI, FC, NFS, NVMe-TCP, and so on. You must consider them when you place the Block Storage services and ensure that the correct storage transport-related binaries are running on the Red Hat OpenShift Container Platform (RHOCP) nodes. For more information about each storage transport protocol, see RHOCP preparation for Block Storage service adoption.
Use a Block Storage service volume service to deploy each Block Storage service volume back end.
For example, you have an LVM back end, a Ceph back end, and two entries in cinderVolumes, and you cannot set global defaults for all volume services. You must define a service for each of them:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm:
          customServiceConfig: |
            [DEFAULT]
            debug = True
            [lvm]
            < . . . >
        ceph:
          customServiceConfig: |
            [DEFAULT]
            debug = True
            [ceph]
            < . . . >

Warning: Check that all configuration options are still valid for RHOSO 18.0. Configuration options might be deprecated, removed, or added. This applies to both back-end driver-specific configuration options and other generic options.
1.11.3. Limitations for adopting the Block Storage service
Before you begin the Block Storage service (cinder) adoption, review the following limitations:
- There is no global nodeSelector option for all Block Storage service volumes. You must specify the nodeSelector for each back end.
- There are no global customServiceConfig or customServiceConfigSecrets options for all Block Storage service volumes. You must specify these options for each back end.
- Support for Block Storage service back ends that require kernel modules that are not included in Red Hat Enterprise Linux is not tested in Red Hat OpenStack Services on OpenShift (RHOSO).
1.11.4. RHOCP preparation for Block Storage service adoption
Before you deploy Red Hat OpenStack Platform (RHOSP) in Red Hat OpenShift Container Platform (RHOCP) nodes, ensure that the networks are ready, that you decide which RHOCP nodes to restrict, and that you make any necessary changes to the RHOCP nodes.
- Node selection
You might need to restrict the RHOCP nodes where the Block Storage service volume and backup services run.
An example of when you need to restrict nodes for a specific Block Storage service is when you deploy the Block Storage service with the LVM driver. In that scenario, the LVM data where the volumes are stored only exists in a specific host, so you need to pin the Block Storage-volume service to that specific RHOCP node. Running the service on any other RHOCP node does not work. You cannot use the RHOCP host node name to restrict the LVM back end. You need to identify the LVM back end by using a unique label, an existing label, or a new label:
$ oc label nodes worker0 lvm=cinder-volumes

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm-iscsi:
          nodeSelector:
            lvm: cinder-volumes
          < . . . >

For more information about node selection, see About node selectors.
Note: If your nodes do not have enough local disk space for temporary images, you can use a remote NFS location by setting the extra volumes feature,
extraMounts.- Transport protocols
Some changes to the storage transport protocols might be required for RHOCP:
-
If you use a
MachineConfigto make changes to RHOCP nodes, the nodes reboot. -
Check the back-end sections that are listed in the
enabled_backendsconfiguration option in yourcinder.conffile to determine the enabled storage back-end sections. -
Depending on the back end, you can find the transport protocol by viewing the
volume_driverortarget_protocolconfiguration options. The
iscsidservice,multipathdservice, andNVMe-TCPkernel modules start automatically on data plane nodes.- NFS
- RHOCP connects to NFS back ends without additional changes.
- Rados Block Device and Red Hat Ceph Storage
- RHOCP connects to Red Hat Ceph Storage back ends without additional changes. You must provide credentials and configuration files to the services.
- iSCSI
- To connect to iSCSI volumes, the iSCSI initiator must run on the RHOCP hosts where the volume and backup services run. The Linux Open iSCSI initiator does not support network namespaces, so you must only run one instance of the service for the normal RHOCP usage, as well as the RHOCP CSI plugins and the RHOSP services.
If you are not already running
iscsidon the RHOCP nodes, then you must apply aMachineConfig. For example:apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-iscsid spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: iscsid.service-
If you use labels to restrict the nodes where the Block Storage services run, you must use a
MachineConfigPoolto limit the effects of theMachineConfigto the nodes where your services might run. For more information, see About node selectors. -
If you are using a single node deployment to test the process, replace
workerwithmasterin theMachineConfig. - For production deployments that use iSCSI volumes, configure multipathing for better I/O.
- FC
- The Block Storage service volume and Block Storage service backup services must run in an RHOCP host that has host bus adapters (HBAs). If some nodes do not have HBAs, then use labels to restrict where these services run. For more information, see About node selectors.
- If the Image service is configured to use Block Storage service as a back end with FC, the Image service must also run on an RHOCP host that has HBAs and follow the same node selection requirements as the Block Storage service.
- If you have virtualized RHOCP clusters that use FC, you need to expose the host HBAs inside the virtual machine.
- For production deployments that use FC volumes, configure multipathing for better I/O.
- NVMe-TCP
- To connect to NVMe-TCP volumes, load NVMe-TCP kernel modules on the RHOCP hosts.
If you do not already load the
nvme-fabricsmodule on the RHOCP nodes where the volume and backup services are going to run, then you must apply aMachineConfig. For example:apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-load-nvme-fabrics spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modules-load.d/nvme_fabrics.conf overwrite: false # Mode must be decimal, this is 0644 mode: 420 user: name: root group: name: root contents: # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397. # This is the rfc2397 text/plain string format source: data:,nvme-fabrics-
If you use labels to restrict the nodes where Block Storage services run, use a
MachineConfigPoolto limit the effects of theMachineConfigto the nodes where your services run. For more information, see About node selectors. -
If you use a single node deployment to test the process, replace
workerwithmasterin theMachineConfig. -
Only load the
nvme-fabricsmodule because it loads the transport-specific modules, such as TCP, RDMA, or FC, as needed. - For production deployments that use NVMe-TCP volumes, use multipathing for better I/O. For NVMe-TCP volumes, RHOCP uses native multipathing, called ANA.
After the RHOCP nodes reboot and load the
nvme-fabricsmodule, you can confirm that the operating system is configured and that it supports ANA by checking the host:$ cat /sys/module/nvme_core/parameters/multipathImportantANA does not use the Linux Multipathing Device Mapper, but RHOCP requires
multipathdto run on Compute nodes for the Compute service (nova) to be able to use multipathing. Multipathing is automatically configured on data plane nodes when they are provisioned.
- Multipathing
Use multipathing for iSCSI and FC protocols. To configure multipathing on these protocols, you perform the following tasks:
- Prepare the RHOCP hosts
- Configure the Block Storage services
- Prepare the Compute service nodes
- Configure the Compute service
To prepare the RHOCP hosts, ensure that the Linux Multipath Device Mapper is configured and running on the RHOCP hosts by using
MachineConfig. For example:# Includes the /etc/multipathd.conf contents and the systemd unit changes apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-master-cinder-enable-multipathd spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/multipath.conf overwrite: false # Mode must be decimal, this is 0600 mode: 384 user: name: root group: name: root contents: # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397. # This is the rfc2397 text/plain string format source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D systemd: units: - enabled: true name: multipathd.service-
If you use labels to restrict the nodes where Block Storage services run, you need to use a
MachineConfigPoolto limit the effects of theMachineConfigto only the nodes where your services run. For more information, see About node selectors. -
If you are using a single node deployment to test the process, replace
workerwithmasterin theMachineConfig. - Cinder volume and backup are configured by default to use multipathing.
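To confirm that the MachineConfig was applied and that multipathd is running on a given node, a quick check from the cluster might look like the following sketch; the node name is a placeholder:

# Chroot into the host filesystem of the node and query the multipathd unit
$ oc debug node/<node_name> -- chroot /host systemctl is-active multipathd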
1.11.5. Converting the Block Storage service configuration
In your previous deployment, you use the same cinder.conf file for all the services. To prepare your Block Storage service (cinder) configuration for adoption, split this single-file configuration into individual configurations for each Block Storage service. Review the following information to guide you in converting your previous configuration:
- Determine what part of the configuration is generic for all the Block Storage services and remove anything that would change when deployed in Red Hat OpenShift Container Platform (RHOCP), such as the connection option in the [database] section, the transport_url and log_dir options in the [DEFAULT] section, and the whole [coordination] and [barbican] sections. The remaining generic configuration goes into the customServiceConfig option, or into a Secret custom resource (CR) that is then used in the customServiceConfigSecrets section, at the cinder: template: level.
- Determine if there is a scheduler-specific configuration and add it to the customServiceConfig option in cinder: template: cinderScheduler.
- Determine if there is an API-specific configuration and add it to the customServiceConfig option in cinder: template: cinderAPI.
- If the Block Storage service backup is deployed, add the Block Storage service backup configuration options to the customServiceConfig option, or to a Secret CR that you can add to the customServiceConfigSecrets section at the cinder: template: cinderBackup: level. Remove the host configuration from the [DEFAULT] section to support multiple replicas later.
- Determine the individual volume back-end configuration for each of the drivers. The configuration is in the specific driver section, and it includes the [backend_defaults] section and FC zoning sections if you use them. The Block Storage service operator does not support a global customServiceConfig option for all volume services. Each back end has its own section under cinder: template: cinderVolumes, and the configuration goes in the customServiceConfig option or in a Secret CR that is then used in the customServiceConfigSecrets section (a Secret sketch follows at the end of this section).

If any of the Block Storage service volume drivers require a custom vendor image, find the location of the image in the Red Hat Ecosystem Catalog, and create or modify an OpenStackVersion CR to specify the custom image by using the key from the cinderVolumes section.

For example, if you have the following configuration:

spec:
  cinder:
    enabled: true
    template:
      cinderVolumes:
        pure:
          customServiceConfigSecrets:
            - openstack-cinder-pure-cfg
< . . . >

Then the OpenStackVersion CR that describes the container image for that back end looks like the following example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderVolumeImages:
      pure: registry.connect.redhat.com/purestorage/openstack-cinder-volume-pure-rhosp-18-0

Note: The name of the OpenStackVersion must match the name of your OpenStackControlPlane CR.

If your Block Storage services use external files, for example, for a custom policy, or to store credentials or SSL certificate authority bundles to connect to a storage array, make those files available to the right containers. Use a Secret or ConfigMap to store the information in RHOCP, and then reference it in the extraMounts key. For example, for Red Hat Ceph Storage credentials that are stored in a Secret called ceph-conf-files, you patch the top-level extraMounts key in the OpenStackControlPlane CR:

spec:
  extraMounts:
    - extraVol:
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph
              readOnly: true
          propagation:
            - CinderVolume
            - CinderBackup
            - Glance
          volumes:
            - name: ceph
              projected:
                sources:
                  - secret:
                      name: ceph-conf-files

For a service-specific file, such as the API policy, you add the configuration on the service itself. In the following example, you include the CinderAPI configuration that references the policy you are adding from a ConfigMap called my-cinder-conf that has a policy key with the contents of the policy:

spec:
  cinder:
    enabled: true
    template:
      cinderAPI:
        customServiceConfig: |
          [oslo_policy]
          policy_file=/etc/cinder/api/policy.yaml
  extraMounts:
    - extraVol:
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/cinder/api
              name: policy
              readOnly: true
          propagation:
            - CinderAPI
          volumes:
            - name: policy
              projected:
                sources:
                  - configMap:
                      name: my-cinder-conf
                      items:
                        - key: policy
                          path: policy.yaml
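As a reference for the customServiceConfigSecrets mechanism mentioned above, the following sketch stores a back-end configuration snippet in a Secret. The Secret name matches the earlier pure example, but the key name and option values are assumptions that you must replace with your driver's real settings; each key's value is assumed to be an INI snippet that is merged into the service configuration:

apiVersion: v1
kind: Secret
metadata:
  name: openstack-cinder-pure-cfg
type: Opaque
stringData:
  # Assumed key name; the value is a configuration snippet for the back end
  pure-backend.conf: |
    [pure]
    volume_backend_name = pure
    volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
    san_ip = <array management IP>
    pure_api_token = <API token>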
1.11.6. Changes to CephFS through NFS
Before you begin the adoption, review the following information to understand the changes to CephFS through NFS between Red Hat OpenStack Platform (RHOSP) 17.1 and Red Hat OpenStack Services on OpenShift (RHOSO) 18.0:
- If the RHOSP 17.1 deployment uses CephFS through NFS as a back end for the Shared File Systems service (manila), you cannot directly import the ceph-nfs service on the RHOSP Controller nodes into RHOSO 18.0. In RHOSO 18.0, the Shared File Systems service only supports using a clustered NFS service that is directly managed on the Red Hat Ceph Storage cluster. Adoption with the ceph-nfs service involves a data path disruption to existing NFS clients.
- On RHOSP 17.1, Pacemaker manages the high availability of the ceph-nfs service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated StorageNFS network. The Controller nodes have ordering and colocation constraints established between this VIP, ceph-nfs, and the Shared File Systems service (manila) share manager service. Before you adopt the Shared File Systems service, you must adjust the Pacemaker ordering and colocation constraints to separate the share manager service. This establishes ceph-nfs with its VIP as an isolated, standalone NFS service that you can decommission after completing the RHOSO adoption.
- In Red Hat Ceph Storage 7, you must deploy a native clustered Ceph NFS service on the Red Hat Ceph Storage cluster by using the Ceph Orchestrator before you adopt the Shared File Systems service. This NFS service eventually replaces the standalone NFS service from RHOSP 17.1 in your deployment. When the Shared File Systems service is adopted into the RHOSO 18.0 environment, it establishes all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. After the service is decommissioned, you can re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime.
- To ensure that NFS users are not required to make any networking changes to their existing workloads, assign an IP address from the same isolated StorageNFS network to the clustered Ceph NFS service. NFS users only need to discover and re-mount their shares by using new export paths. When the adoption is complete, RHOSO users can query the Shared File Systems service API to list the export locations on existing shares to identify the preferred paths to mount these shares (see the example after this list). These preferred paths correspond to the new clustered Ceph NFS service, in contrast to other non-preferred export paths that continue to be displayed until the old isolated, standalone NFS service is decommissioned.
- When you migrate your workloads from the old NFS service, ensure that exports are not consumed from both the old NFS service and the new clustered Ceph NFS service at the same time. Simultaneous access to both services is dangerous and bypasses the protections for concurrent access that the NFS protocol provides. When you migrate the workloads to use exports from the new NFS service, migrate the use of each export entirely so that no part of the workload stays connected to the old NFS service.
- You can no longer control the old Pacemaker-managed ceph-nfs service through the Red Hat OpenStack Platform director after the control plane adoption is complete. This means that there is no support for updating the NFS Ganesha software or changing any configuration. While data is protected from server crashes or restarts, high availability and data recovery are limited, and these maintenance issues are no longer visible to the Shared File Systems service.
- Cloud administrators must ensure a reasonably short window to switch over all end-user workloads to the new NFS service.
- While the old ceph-nfs service supported only NFS version 4.1 and later, the new clustered NFS service supports NFS protocols 3 and 4.1 and later. Mixing protocol versions on an export results in unintended consequences. Mount a given share across all clients by using a consistent NFS protocol version.
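For example, after the adoption a cloud user could list the export locations of a share to find the preferred path that points at the new clustered NFS service. The share name is a placeholder, and the exact output columns depend on your client version:

$ openstack share export location list <share_name>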
1.12. Red Hat Ceph Storage prerequisites
Before you migrate your Red Hat Ceph Storage cluster daemons from your Controller nodes, you must complete the following tasks in your Red Hat OpenStack Platform 17.1 environment to prepare for the Red Hat OpenStack Services on OpenShift (RHOSO) adoption.
- Upgrade your Red Hat Ceph Storage cluster to release 7. For more information, see "Upgrading Red Hat Ceph Storage 6 to 7" in Framework for upgrades (16.2 to 17.1).
- Your Red Hat Ceph Storage 7 deployment is managed by cephadm.
- The undercloud is still available, and the nodes and networks are managed by director.
- If you use an externally deployed Red Hat Ceph Storage cluster, you must recreate a ceph-nfs cluster on the target nodes and propagate the StorageNFS network.

Complete the prerequisites for your specific Red Hat Ceph Storage environment:
- Red Hat Ceph Storage with monitoring stack components
- Red Hat Ceph Storage RGW
- Red Hat Ceph Storage RBD
- NFS Ganesha
1.12.1. Completing prerequisites for a Red Hat Ceph Storage cluster with monitoring stack components
Before you migrate a Red Hat Ceph Storage cluster with monitoring stack components, you must gather monitoring stack information, review and update the container image registry, and remove the undercloud container images.
In addition to updating the container images related to the monitoring stack, you must update the configuration entry related to the container_image_base. This has an impact on all the Red Hat Ceph Storage daemons that rely on the undercloud images. New daemons are deployed by using the new image registry location that is configured in the Red Hat Ceph Storage cluster.
Procedure
Gather the current status of the monitoring stack. Verify that the hosts have no monitoring label, or the grafana, prometheus, or alertmanager labels, in case of a per-daemon placement evaluation:

Note: The entire relocation process is driven by cephadm and relies on labels that are assigned to the target nodes where the daemons are scheduled. For more information about assigning labels to nodes, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations.

[tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph orch host ls

HOST                        ADDR           LABELS          STATUS
cephstorage-0.redhat.local  192.168.24.11  osd mds
cephstorage-1.redhat.local  192.168.24.12  osd mds
cephstorage-2.redhat.local  192.168.24.47  osd mds
controller-0.redhat.local   192.168.24.35  _admin mon mgr
controller-1.redhat.local   192.168.24.53  mon _admin mgr
controller-2.redhat.local   192.168.24.10  mon _admin mgr
6 hosts in cluster
ceph orch lsandceph orch psreturn the expected number of deployed daemons.Review and update the container image registry:
NoteIf you run the Red Hat Ceph Storage externalization procedure after you migrate the Red Hat OpenStack Platform control plane, update the container images in the Red Hat Ceph Storage cluster configuration. The current container images point to the undercloud registry, which might not be available anymore. Because the undercloud is not available after adoption is complete, replace the undercloud-provided images with an alternative registry.
$ ceph config dump
...
...
mgr  advanced  mgr/cephadm/container_image_alertmanager   undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.10
mgr  advanced  mgr/cephadm/container_image_base           undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph
mgr  advanced  mgr/cephadm/container_image_grafana        undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest
mgr  advanced  mgr/cephadm/container_image_node_exporter  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.10
mgr  advanced  mgr/cephadm/container_image_prometheus     undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.10

Remove the undercloud container images:
$ cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_base
for i in prometheus grafana alertmanager node_exporter; do
    cephadm shell -- ceph config rm mgr mgr/cephadm/container_image_$i
done
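If you still need the monitoring stack after adoption, you can instead point these options at a registry that remains reachable. The following is only a sketch; the registry host, image names, and tags are placeholders that you must replace with real image locations:

$ cephadm shell -- ceph config set mgr mgr/cephadm/container_image_prometheus <registry>/<prometheus_image>:<tag>
$ cephadm shell -- ceph config set mgr mgr/cephadm/container_image_grafana <registry>/<grafana_image>:<tag>
$ cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager <registry>/<alertmanager_image>:<tag>
$ cephadm shell -- ceph config set mgr mgr/cephadm/container_image_node_exporter <registry>/<node_exporter_image>:<tag>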
1.12.2. Completing prerequisites for Red Hat Ceph Storage RGW migration
Complete the following prerequisites before you begin the Ceph Object Gateway (RGW) migration.
Procedure
Check the current status of the Red Hat Ceph Storage nodes:
(undercloud) [stack@undercloud-0 ~]$ metalsmith list

+------------------------+----------------+
| IP Addresses           | Hostname       |
+------------------------+----------------+
| ctlplane=192.168.24.25 | cephstorage-0  |
| ctlplane=192.168.24.10 | cephstorage-1  |
| ctlplane=192.168.24.32 | cephstorage-2  |
| ctlplane=192.168.24.28 | compute-0      |
| ctlplane=192.168.24.26 | compute-1      |
| ctlplane=192.168.24.43 | controller-0   |
| ctlplane=192.168.24.7  | controller-1   |
| ctlplane=192.168.24.41 | controller-2   |
+------------------------+----------------+

Log in to controller-0 and check the Pacemaker status to identify important information for the RGW migration:

Full List of Resources:
  * ip-192.168.24.46 (ocf:heartbeat:IPaddr2): Started controller-0
  * ip-10.0.0.103 (ocf:heartbeat:IPaddr2): Started controller-1
  * ip-172.17.1.129 (ocf:heartbeat:IPaddr2): Started controller-2
  * ip-172.17.3.68 (ocf:heartbeat:IPaddr2): Started controller-0
  * ip-172.17.4.37 (ocf:heartbeat:IPaddr2): Started controller-1
  * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
    * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
    * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
    * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
[heat-admin@controller-0 ~]$ ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever 2: enp1s0 inet 192.168.24.45/24 brd 192.168.24.255 scope global enp1s0\ valid_lft forever preferred_lft forever 2: enp1s0 inet 192.168.24.46/32 brd 192.168.24.255 scope global enp1s0\ valid_lft forever preferred_lft forever 7: br-ex inet 10.0.0.122/24 brd 10.0.0.255 scope global br-ex\ valid_lft forever preferred_lft forever 8: vlan70 inet 172.17.5.22/24 brd 172.17.5.255 scope global vlan70\ valid_lft forever preferred_lft forever 8: vlan70 inet 172.17.5.94/32 brd 172.17.5.255 scope global vlan70\ valid_lft forever preferred_lft forever 9: vlan50 inet 172.17.2.140/24 brd 172.17.2.255 scope global vlan50\ valid_lft forever preferred_lft forever 10: vlan30 inet 172.17.3.73/24 brd 172.17.3.255 scope global vlan30\ valid_lft forever preferred_lft forever 10: vlan30 inet 172.17.3.68/32 brd 172.17.3.255 scope global vlan30\ valid_lft forever preferred_lft forever 11: vlan20 inet 172.17.1.88/24 brd 172.17.1.255 scope global vlan20\ valid_lft forever preferred_lft forever 12: vlan40 inet 172.17.4.24/24 brd 172.17.4.255 scope global vlan40\ valid_lft forever preferred_lft forever-
- br-ex represents the External Network, where in the current environment, HAProxy has the front-end Virtual IP (VIP) assigned.
- vlan30 represents the Storage Network, where the new RGW instances should be started on the Red Hat Ceph Storage nodes.
Identify the network that you previously had in HAProxy and propagate it through director to the Red Hat Ceph Storage nodes. Use this network to reserve a new VIP that is owned by Red Hat Ceph Storage as the entry point for the RGW service (see the reservation sketch at the end of this procedure).
Log in to controller-0 and find the ceph_rgw section in the current HAProxy configuration:

$ less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

...
...
listen ceph_rgw
  bind 10.0.0.103:8080 transparent
  bind 172.17.3.68:8080 transparent
  mode http
  balance leastconn
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk GET /swift/healthcheck
  option httplog
  option forwardfor
  server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
  server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
  server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2
controller-0exposes the services by using the external network, which is absent from the Red Hat Ceph Storage nodes. You must propagate the external network through director:[controller-0]$ ip -o -4 a ... 7: br-ex inet 10.0.0.106/24 brd 10.0.0.255 scope global br-ex\ valid_lft forever preferred_lft forever ...NoteIf the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.
Propagate the HAProxy front-end network to Red Hat Ceph Storage nodes.
In the NIC template that you use to define the ceph-storage network interfaces, add the new config section in the Red Hat Ceph Storage network configuration template file, for example, /home/stack/composable_roles/network/nic-configs/ceph-storage.j2:

---
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    addresses:
      - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
  - type: vlan
    vlan_id: {{ storage_mgmt_vlan_id }}
    device: nic1
    addresses:
      - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }}
    routes: {{ storage_mgmt_host_routes }}
  - type: interface
    name: nic2
    use_dhcp: false
    defroute: false
  - type: vlan
    vlan_id: {{ storage_vlan_id }}
    device: nic2
    addresses:
      - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
    routes: {{ storage_host_routes }}
  - type: ovs_bridge
    name: {{ neutron_physical_bridge_name }}
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    use_dhcp: false
    addresses:
      - ip_netmask: {{ external_ip }}/{{ external_cidr }}
    routes: {{ external_host_routes }}
    members:
      - type: interface
        name: nic3
        primary: true
/home/stack/composable_roles/network/baremetal_deployment.yamlthat is used bymetalsmith:NoteEnsure that network_config_update is enabled for network propagation to the target nodes when
os-net-configis triggered.- name: CephStorage count: 3 hostname_format: cephstorage-%index% instances: - hostname: cephstorage-0 name: ceph-0 - hostname: cephstorage-1 name: ceph-1 - hostname: cephstorage-2 name: ceph-2 defaults: profile: ceph-storage network_config: template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2 network_config_update: true networks: - network: ctlplane vif: true - network: storage - network: storage_mgmt - network: externalConfigure the new network on the bare metal nodes:
(undercloud) [stack@undercloud-0]$ openstack overcloud node provision \ -o overcloud-baremetal-deployed-0.yaml \ --stack overcloud \ --network-config -y \ $PWD/composable_roles/network/baremetal_deployment.yamlVerify that the new network is configured on the Red Hat Ceph Storage nodes:
[root@cephstorage-0 ~]# ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever 2: enp1s0 inet 192.168.24.54/24 brd 192.168.24.255 scope global enp1s0\ valid_lft forever preferred_lft forever 11: vlan40 inet 172.17.4.43/24 brd 172.17.4.255 scope global vlan40\ valid_lft forever preferred_lft forever 12: vlan30 inet 172.17.3.23/24 brd 172.17.3.255 scope global vlan30\ valid_lft forever preferred_lft forever 14: br-ex inet 10.0.0.133/24 brd 10.0.0.255 scope global br-ex\ valid_lft forever preferred_lft forever
1.12.3. Completing prerequisites for a Red Hat Ceph Storage RBD migration
Complete the following prerequisites before you begin the Red Hat Ceph Storage Rados Block Device (RBD) migration.
- The target CephStorage or ComputeHCI nodes are configured to have both the storage and storage_mgmt networks. This ensures that you can use both the Red Hat Ceph Storage public and cluster networks from the same node. In Red Hat OpenStack Platform 17.1 and later, you do not have to run a stack update.
- NFS Ganesha is migrated from a director deployment to cephadm. For more information, see "Creating an NFS Ganesha cluster".
- Ceph Metadata Server, monitoring stack, Ceph Object Gateway, and any other daemons that are deployed on Controller nodes are migrated to the target nodes.
- The distribution of daemons follows the cardinality constraints that are described in "Red Hat Ceph Storage: Supported configurations".
- The Red Hat Ceph Storage cluster is healthy, and the ceph -s command returns HEALTH_OK.
os-net-configon the bare metal node and configure additional networks:If target nodes are
CephStorage, ensure that the network is defined in the bare metal file for theCephStoragenodes, for example,/home/stack/composable_roles/network/baremetal_deployment.yaml:- name: CephStorage count: 2 instances: - hostname: oc0-ceph-0 name: oc0-ceph-0 - hostname: oc0-ceph-1 name: oc0-ceph-1 defaults: networks: - network: ctlplane vif: true - network: storage_cloud_0 subnet: storage_cloud_0_subnet - network: storage_mgmt_cloud_0 subnet: storage_mgmt_cloud_0_subnet network_config: template: templates/single_nic_vlans/single_nic_vlans_storage.j2Add the missing network:
$ openstack overcloud node provision \ -o overcloud-baremetal-deployed-0.yaml --stack overcloud-0 \ /--network-config -y --concurrency 2 /home/stack/metalsmith-0.yamlVerify that the storage network is configured on the target nodes:
(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.14 ip -o -4 a 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever 5: br-storage inet 192.168.24.14/24 brd 192.168.24.255 scope global br-storage\ valid_lft forever preferred_lft forever 6: vlan1 inet 192.168.24.14/24 brd 192.168.24.255 scope global vlan1\ valid_lft forever preferred_lft forever 7: vlan11 inet 172.16.11.172/24 brd 172.16.11.255 scope global vlan11\ valid_lft forever preferred_lft forever 8: vlan12 inet 172.16.12.46/24 brd 172.16.12.255 scope global vlan12\ valid_lft forever preferred_lft forever
1.12.4. Creating an NFS Ganesha cluster
If you use CephFS through NFS with the Shared File Systems service (manila), you must create a new clustered NFS service on the Red Hat Ceph Storage cluster. This service replaces the standalone, Pacemaker-controlled ceph-nfs service that you use in Red Hat OpenStack Platform (RHOSP) 17.1.
Procedure
Identify the Red Hat Ceph Storage nodes to deploy the new clustered NFS service, for example, cephstorage-0, cephstorage-1, and cephstorage-2.

Note: You must deploy this service on the StorageNFS isolated network so that you can mount your existing shares through the new NFS export locations. You can deploy the new clustered NFS service on your existing CephStorage nodes or HCI nodes, or on new hardware that you enrolled in the Red Hat Ceph Storage cluster.

If you deployed your Red Hat Ceph Storage nodes with director, propagate the StorageNFS network to the target nodes where the ceph-nfs service is deployed.

Note: If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.
- Identify the node definition file, overcloud-baremetal-deploy.yaml, that is used in the RHOSP environment. For more information about identifying the overcloud-baremetal-deploy.yaml file, see Customizing overcloud networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
Edit the networks that are associated with the Red Hat Ceph Storage nodes to include the StorageNFS network:

- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
    - hostname: cephstorage-0
      name: ceph-0
    - hostname: cephstorage-1
      name: ceph-1
    - hostname: cephstorage-2
      name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/network/nic-configs/ceph-storage.j2
      network_config_update: true
    networks:
      - network: ctlplane
        vif: true
      - network: storage
      - network: storage_mgmt
      - network: storage_nfs
/home/stack/network/nic-configs/ceph-storage.j2, for the Red Hat Ceph Storage nodes to include an interface that connects to theStorageNFSnetwork:- type: vlan device: nic2 vlan_id: {{ storage_nfs_vlan_id }} addresses: - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }} routes: {{ storage_nfs_host_routes }}Update the Red Hat Ceph Storage nodes:
$ openstack overcloud node provision \ --stack overcloud \ --network-config -y \ -o overcloud-baremetal-deployed-storage_nfs.yaml \ --concurrency 2 \ /home/stack/network/baremetal_deployment.yamlWhen the update is complete, ensure that a new interface is created in theRed Hat Ceph Storage nodes and that they are tagged with the VLAN that is associated with
StorageNFS.
Identify the IP address from the StorageNFS network to use as the Virtual IP address (VIP) for the Ceph NFS service:

$ openstack port list -c "Fixed IP Addresses" --network storage_nfs
cephadmshell, identify the hosts for the NFS service:$ ceph orch host lsLabel each host that you identified. Repeat this command for each host that you want to label:
$ ceph orch host label add <hostname> nfs-
Replace
<hostname>with the name of the host that you identified.
Create the NFS cluster:
$ ceph nfs cluster create cephfs \
    "label:nfs" \
    --ingress \
    --virtual-ip=<VIP> \
    --ingress-mode=haproxy-protocol

Replace <VIP> with the VIP for the Ceph NFS service.

Note: You must set the ingress-mode argument to haproxy-protocol. No other ingress mode is supported. This ingress mode allows you to enforce client restrictions through the Shared File Systems service. For more information about deploying the clustered Ceph NFS service, see "Management of NFS-Ganesha gateway using the Ceph Orchestrator" in the Operations Guide for your Red Hat Ceph Storage version.
Check the status of the NFS cluster:
$ ceph nfs cluster ls
$ ceph nfs cluster info cephfs
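You can also confirm that cephadm scheduled both the NFS daemons and the ingress service that carries the VIP; this quick check is a sketch and is not part of the official procedure:

# List the orchestrator services and keep only the NFS and ingress entries
$ ceph orch ls | grep -E 'nfs|ingress'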
1.13. Preparing an Instance HA deployment for adoption
To enable the high availability for Compute instances (Instance HA) service after you adopt the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 data plane, perform the following preparation tasks:
- Create a fencing configuration file to use after you adopt the RHOSO data plane.
- Prevent Pacemaker from monitoring or recovering the Compute nodes.
1.13.1. Maintaining the Instance HA functionality after adoption
To maintain the high availability for Compute instances (Instance HA) functionality after you adopt Red Hat OpenStack Services on OpenShift 18.0, create a fencing configuration file to use in your adopted environment.
Procedure
- Gather the fencing information from the fencing.yaml file in your Red Hat OpenStack Platform (RHOSP) 17.1 cluster.

Retrieve the RHOSP 17.1 stonith configuration from any of your overcloud Controller nodes:
$ sudo pcs config

Stonith Devices:
...
  Resource: stonith-fence_ipmilan-525400dde4f7 (class=stonith type=fence_ipmilan)
    Attributes: stonith-fence_ipmilan-525400dde4f7-instance_attributes delay=20 ipaddr=172.16.0.1 ipport=6231 lanplus=true login=admin passwd=password pcmk_host_list=compute-1
    Operations: monitor: stonith-fence_ipmilan-525400dde4f7-monitor-interval-60s interval=60s
  Resource: stonith-fence_ipmilan-525400819ad3 (class=stonith type=fence_ipmilan)
    Attributes: stonith-fence_ipmilan-525400819ad3-instance_attributes delay=20 ipaddr=172.16.0.1 ipport=6230 lanplus=true login=admin passwd=password pcmk_host_list=compute-0
    Operations: monitor: stonith-fence_ipmilan-525400819ad3-monitor-interval-60s interval=60s
...

Generate the fencing configuration file:
- To install the script that automatically generates this file, see How do I automatically generate fencing secret for RHOSO18 instanceha from a osp17.1 cluster that I want to adopt?.
- To create the fencing configuration file manually, see Configuring the fencing of Compute nodes in Configuring high availability for instances.
1.13.2. Preventing Pacemaker from monitoring Compute nodes
You must disable Pacemaker so that it does not monitor your Compute nodes during the adoption. For example, if a network issue occurs during the adoption, Pacemaker attempts to reboot the Compute nodes to recover them, which breaks the adoption.
Procedure
Retrieve the names of the Compute remote resources:
$ sudo pcs stonith |grep -B1 stonith-fence_compute-fence-nova |grep Target |awk -F ': ' '{print $2}'

Disable the stonith and pacemaker_remote resources on each Compute remote resource:

$ sudo pcs property set stonith-enabled=false
$ sudo pcs resource disable <compute_remote_resource>

where:
<compute_remote_resource>- Specifies the name of the Compute remote resource in your environment.
Retrieve the names of the Compute stonith resources:

$ sudo pcs stonith |grep Level |grep fence_compute |awk '{print $4}' |awk -F ',' '{print $1}' |sort |uniq

Remove the Compute node pacemaker_remote and fencing resources:

$ sudo pcs stonith disable stonith-fence_compute-fence-nova
$ sudo pcs stonith disable <compute_stonith_resource>
$ sudo pcs stonith delete <compute_stonith_resource>
$ sudo pcs resource delete <compute_remote_resource>
$ sudo pcs resource disable compute-unfence-trigger-clone
$ sudo pcs resource delete compute-unfence-trigger-clone
$ sudo pcs resource disable nova-evacuate
$ sudo pcs resource delete nova-evacuate

where:
<compute_stonith_resource>- Specifies the name of the Compute stonith resource in your environment.
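After you remove the resources, you can quickly confirm that Pacemaker no longer tracks any Instance HA resources. This check is not part of the official procedure; the resource names match the ones removed above:

$ sudo pcs status --full | grep -iE 'fence_compute|compute-unfence-trigger|nova-evacuate' \
    || echo "No Instance HA resources remain"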
1.14. Comparing configuration files between deployments
To help you manage the configuration for your director and Red Hat OpenStack Platform (RHOSP) services, you can compare the configuration files between your director deployment and the Red Hat OpenStack Services on OpenShift (RHOSO) cloud by using the os-diff tool.
Prerequisites
Golang is installed and configured in your environment:
dnf install -y golang-github-openstack-k8s-operators-os-diff
Procedure
Configure the /etc/os-diff/os-diff.cfg file and the /etc/os-diff/ssh.config file according to your environment. To allow os-diff to connect to your clouds and pull files from the services that you describe in the config.yaml file, you must set the following options in the os-diff.cfg file:

[Default]
local_config_dir=/tmp/
service_config_file=config.yaml

[Tripleo]
ssh_cmd=ssh -F ssh.config
director_host=standalone
container_engine=podman
connection=ssh
remote_config_path=/tmp/tripleo
local_config_path=/tmp/

[Openshift]
ocp_local_config_path=/tmp/ocp
connection=local
ssh_cmd=""
- ssh_cmd=ssh -F ssh.config instructs os-diff to access your director host through SSH. The default value is ssh -F ssh.config. However, you can set the value without an ssh.config file, for example, ssh -i /home/user/.ssh/id_rsa stack@my.undercloud.local.
- director_host=standalone specifies the host to use to access your cloud, where the podman or docker binary is installed and allowed to interact with the running containers. You can leave this key blank.
- If you use a host file to connect to your cloud, configure the ssh.config file to allow os-diff to access your RHOSP environment, for example:

Host *
    IdentitiesOnly yes

Host virthost
    Hostname virthost
    IdentityFile ~/.ssh/id_rsa
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Host standalone
    Hostname standalone
    IdentityFile <path to SSH key>
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Host crc
    Hostname crc
    IdentityFile ~/.ssh/id_rsa
    User stack
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
- Replace <path to SSH key> with the path to your SSH key. You must provide a value for IdentityFile to get full working access to your RHOSP environment.
If you use an inventory file to connect to your cloud, generate the ssh.config file from your Ansible inventory, for example, the tripleo-ansible-inventory.yaml file:

$ os-diff configure -i tripleo-ansible-inventory.yaml -o ssh.config --yaml
Verification
Test your connection:
$ ssh -F ssh.config standalone
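Once the connection works, you would typically pull the service configurations and compare individual files. The following is an illustrative sketch of that workflow; confirm the exact subcommands and paths with os-diff --help for your version, because the pull behavior and the output directory depend on your os-diff.cfg settings:

# Pull the configuration files that are listed in config.yaml from the director host
$ os-diff pull

# Compare a pulled cinder.conf against the configuration snippet you plan to apply in RHOSO
$ os-diff diff /tmp/tripleo/cinder/etc/cinder/cinder.conf cinder.patch --service cinder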
1.15. Preventing configuration loss when using the oc patch command
When you use the oc patch command to modify a resource, the changes are applied directly to the live object in your OpenShift cluster. If you later edit the custom resource (CR) file for the resource and apply the updates by using oc apply -f <filename>, your previous patched changes are overwritten and lost from the resource.
To prevent loss of configuration, you can use the --patch-file option with the oc patch command and retain your patch files. Alternatively, you can export your openstackcontrolplane CR after the patch is applied:
$ oc get <resource_type> <resource_name> -o yaml > <filename>.yaml
For example:
$ oc get OpenStackControlPlane openstack-control-plane -o yaml > openstack_control_plane.yaml
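For example, keeping the patch in a file makes the change repeatable and reviewable. The patched field in this sketch is illustrative; substitute the change that you actually need:

# Write the patch to a file so it can be stored in version control and reapplied
$ cat > cinder-backup-replicas.yaml <<'EOF'
spec:
  cinder:
    template:
      cinderBackup:
        replicas: 3
EOF

# Apply the patch from the file instead of passing it inline
$ oc patch openstackcontrolplane openstack-control-plane --type=merge --patch-file cinder-backup-replicas.yaml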