Chapter 2. Migrating the ML2 mechanism driver from OVS to OVN
2.1. Preparing the environment for migration to the OVN mechanism driver
Environment assessment and preparation is critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.
Prerequisites
- Your deployment is the latest RHOSP 17.1 version. In other words, if you need to upgrade or update your OpenStack version, perform the upgrade or update first, and then perform the ML2/OVS to ML2/OVN migration.
- At least one IP address is available for each subnet pool. The OVN mechanism driver creates a metadata port for each subnet, and each metadata port claims an IP address from the IP address pool. See the availability check after this list.
- You have worked with your Red Hat Technical Account Manager or Global Professional Services to plan the migration and have filed a proactive support case. See How to submit a Proactive Case.
- If your ML2/OVS deployment uses VXLAN project networks, review the potential adjustments described in Section 2.3, “Lowering MTU for migration from a VXLAN OVS deployment”.
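To confirm that at least one IP address is free in each subnet pool, you can query network IP availability with the OpenStack CLI. This is a minimal sketch; the network name is a placeholder for your own networks:
$ openstack ip availability list
$ openstack ip availability show <network-name>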
Procedure
Create an ML2/OVN stage deployment to obtain the baseline configuration of your target ML2/OVN deployment and test the feasibility of the target deployment.
Design the stage deployment with the same basic roles, routing, and topology as the planned post-migration production deployment. Save the full openstack overcloud deploy command, along with all deployment arguments, into a file called overcloud-deploy.sh. Also save any files referenced by the openstack overcloud deploy command, such as environment files. You need these files later in this procedure to configure the migration’s target ML2/OVN environment.
Note: Use these files only for creation of the stage deployment and in the migration. Do not re-use them after the migration.
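A minimal sketch of what overcloud-deploy.sh might contain. The template path is the standard one; the environment files shown are placeholders for your deployment’s actual files:
#!/bin/bash
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/custom-overrides.yaml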
Install openstack-neutron-ovn-migration-tool:
$ sudo dnf install openstack-neutron-ovn-migration-tool
- Copy the overcloud-deploy.sh script that you created in Step 1 and rename the copy to overcloud-migrate-ovn.sh. Confirm that all paths for the overcloud deploy command inside the overcloud-migrate-ovn.sh script are still correct, as in the example after this step. You customize some arguments in overcloud-migrate-ovn.sh in subsequent steps.
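For example, assuming both scripts are in the stack user’s home directory. The grep command is optional; it lists the -e environment file arguments so that you can confirm their paths:
$ cp ~/overcloud-deploy.sh ~/overcloud-migrate-ovn.sh
$ grep -n -- '-e ' ~/overcloud-migrate-ovn.sh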
Find your migration scenario in the following list and perform the appropriate steps to customize the openstack overcloud deploy command in overcloud-migrate-ovn.sh.
In the deployment command, pay careful attention to the order of the -e arguments that add environment files. The environment file with the generic defaults (such as neutron-ovn-dvr-ha.yaml) must precede the -e argument that specifies the file with custom network environment settings such as bridge mappings.
- Scenario 1: DVR to DVR, Compute nodes have connectivity to the external network
In overcloud-migrate-ovn.sh, add custom heat template file arguments to the openstack overcloud deploy command. Add them after the core template file arguments.
The following command example uses the default neutron-ovn-dvr-ha.yaml heat template file. Your deployment might use multiple heat files to define your OVN environment. Add each with a separate -e argument.
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
- Scenario 2: Centralized routing to centralized routing (no DVR)
  - If your deployment uses SR-IOV and other NFV features, in overcloud-migrate-ovn.sh, use -e arguments to add the SR-IOV environment parameters to the openstack overcloud deploy command. Add the SR-IOV environment files after the core template environment file arguments and other custom environment file arguments. For an example of an SR-IOV environment file, see /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml.
  - Leave any custom network modifications the same as they were before migration.
- Scenario 3: Centralized routing to DVR, and Compute nodes connected to external networks through br-ex
Ensure that Compute nodes are connected to the external network through the br-ex bridge. For example, in an environment file such as compute-dvr.yaml, set the following parameters. Then use -e to add the environment file to the openstack overcloud deploy command in the script overcloud-migrate-ovn.sh:
type: ovs_bridge
# Defaults to br-ex, anything else requires specific
# bridge mapping entries for it to be used.
name: bridge_name
use_dhcp: false
members:
  - type: interface
    name: nic3
    # force the MAC address of the bridge to this interface
    primary: true
Add the following arguments at the end of the overcloud deploy command in overcloud-migrate-ovn.sh:
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-container-manage-clean-orphans.yaml \
-e $HOME/ovn-extras.yaml
If router appears as a value for NeutronServicePlugins or NeutronPluginExtensions in any environment file or template, replace the value router with ovn-router. For example, in tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml:
parameter_defaults:
  NeutronServicePlugins: "ovn-router,trunk,qos,placement"
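To locate files that still set the router value, a quick search such as the following can help. The template directory is a placeholder for wherever your environment files live:
$ grep -rn 'NeutronServicePlugins\|NeutronPluginExtensions' /home/stack/templates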
Ensure that all users have execution privileges on the file overcloud-migrate-ovn.sh. The script requires execution privileges during the migration process.
$ chmod a+x ~/overcloud-migrate-ovn.sh
Use export commands to set the following migration-related environment variables. For example:
$ export OVERCLOUDRC_FILE=~/myovercloudrc
- STACKRC_FILE
  The stackrc file in your undercloud.
  Default: ~/stackrc
- OVERCLOUDRC_FILE
  The overcloudrc file in your undercloud.
  Default: ~/overcloudrc
- OVERCLOUD_OVN_DEPLOY_SCRIPT
  The deployment script.
  Default: ~/overcloud-migrate-ovn.sh
- DHCP_RENEWAL_TIME
  DHCP renewal time, in seconds, to configure in the DHCP agent configuration file.
  Default: 30
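For example, to set all four variables explicitly (the values shown are the documented defaults):
$ export STACKRC_FILE=~/stackrc
$ export OVERCLOUDRC_FILE=~/overcloudrc
$ export OVERCLOUD_OVN_DEPLOY_SCRIPT=~/overcloud-migrate-ovn.sh
$ export DHCP_RENEWAL_TIME=30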
Ensure that you are in the ovn-migration directory and run the command ovn_migration.sh generate-inventory to generate the hosts_for_migration inventory file and the ansible.cfg file:
$ ovn_migration.sh generate-inventory | sudo tee -a /var/log/ovn_migration_output.txt
Review the hosts_for_migration file for accuracy (a hypothetical excerpt follows this list):
- Ensure the lists match your environment.
- Ensure there are OVN controllers on each node.
- Ensure there are no list headings (such as [ovn-controllers]) that do not have list items under them.
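A hypothetical excerpt of a well-formed hosts_for_migration file. The group heading matches the one mentioned above; the host names are illustrative:
[ovn-controllers]
overcloud-controller-0
overcloud-controller-1
overcloud-controller-2
overcloud-novacompute-0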
From the ovn-migration directory, run the following command to verify Ansible communication with all hosts:
$ ansible -i hosts_for_migration -m ping all
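If communication succeeds, each host responds with a pong from the Ansible ping module; the output resembles the following (host name illustrative):
overcloud-controller-0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}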
(Optional) Back up the deployment to prepare for a potential migration revert in case something unexpected happens during the migration.
Use export commands to set the following environment variables if you plan to use the ovn_migration.sh backup command to back up the controller nodes. You can see a list of all relevant environment variables at the beginning of the /usr/bin/ovn_migration.sh file.
- BACKUP_MIGRATION_IP
  The IP address of the server where the backup is stored.
  Default: 192.168.24.1
- BACKUP_MIGRATION_CTL_PLANE_CIDRS
  A comma-separated string of control plane subnets in CIDR notation for all nodes that will be backed up.
  Default: 192.168.24.0/24
- CONTROLLER_GROUP
  Host group name used by Ansible to back up controllers.
  Default: Controller
  If your controller group has a name other than Controller, export that name as the value of CONTROLLER_GROUP. For example, in SR-IOV environments, the controller group name might be ControllerSriov.
- OVERCLOUD_OVS_REVERT_SCRIPT
  Used to optionally revert from an unsuccessful OVN migration if you created the optional backup.
  Default: ~/overcloud-revert-ovs.sh
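For example, to back up an SR-IOV environment (the values shown are the documented defaults and example group name; adjust them to your environment):
$ export BACKUP_MIGRATION_IP=192.168.24.1
$ export BACKUP_MIGRATION_CTL_PLANE_CIDRS=192.168.24.0/24
$ export CONTROLLER_GROUP=ControllerSriov
$ export OVERCLOUD_OVS_REVERT_SCRIPT=~/overcloud-revert-ovs.sh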
Back up the control plane. Use the backup mechanism of your choice to back up the controller nodes. The supported choice is the default ovn_migration.sh backup command, which uses the ReaR backup tool.
$ ovn_migration.sh backup
Back up the templates and environment files that were used to deploy the overcloud. The ovn_migration.sh backup command does not back up the overcloud. If you need to revert the controller nodes after a partial or failed migration, you will need this backup to restore the OVS overcloud.
Copy the script that you used to deploy the original RHOSP 17.1 ML2/OVS deployment. For example, the original script might be named overcloud_deploy.sh. Name the copy overcloud-revert-ovs.sh.
Warning: If overcloud-revert-ovs.sh creates a file, make sure to specify an absolute path to that file. For example, if you use the --log-file argument, specify the file with an absolute path. The migration revert playbook uses the variable $ANSIBLE_DIR (which defaults to /usr/share/ansible/neutron-ovn-migration). If your script creates a file on a relative path, Ansible tries to write it in $ANSIBLE_DIR, where the revert user might not have adequate permissions.
Create a file /home/stack/ovs-extra.yml with the following contents:
parameter_defaults:
  ForceNeutronDriverUpdate: true
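One way to create the file, sketched as a shell heredoc:
$ cat > /home/stack/ovs-extra.yml <<'EOF'
parameter_defaults:
  ForceNeutronDriverUpdate: true
EOF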
Ensure that the final environment file argument in overcloud-revert-ovs.sh is the following:
-e /home/stack/ovs-extra.yml
- Store overcloud-revert-ovs.sh securely. You will need it if you revert a failed migration.
- Proceed to Section 2.2, “Preparing container images for migration of the ML2 mechanism driver from OVS to OVN”.
2.2. Preparing container images for migration of the ML2 mechanism driver from OVS to OVN
Environment assessment and preparation is critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.
Prerequisites
- You have completed the steps in Preparing the environment for migration of the ML2 mechanism driver from OVS to OVN
Procedure
Prepare the new container images for use after the migration to ML2/OVN.
Create a containers-prepare-parameter.yaml file in the home directory if it is not present:
$ test -f $HOME/containers-prepare-parameter.yaml || sudo openstack tripleo container image prepare default \
  --output-env-file $HOME/containers-prepare-parameter.yaml
Verify that containers-prepare-parameter.yaml is present at the end of your $HOME/overcloud-migrate-ovn.sh and $HOME/overcloud-deploy.sh files.
Change the neutron_driver in the containers-prepare-parameter.yaml file to ovn:
$ sed -i -E 's/neutron_driver:([ ]\w+)/neutron_driver: ovn/' $HOME/containers-prepare-parameter.yaml
Verify the changes to the neutron_driver:
$ grep neutron_driver $HOME/containers-prepare-parameter.yaml
neutron_driver: ovn
Update the images:
$ sudo openstack tripleo container image prepare \
  --environment-file /home/stack/containers-prepare-parameter.yaml
Note: Provide the full path to your containers-prepare-parameter.yaml file. Otherwise, the command completes very quickly without updating the image list or providing an error message.
On the undercloud, validate the updated images. Log in to the undercloud as the user stack, source the stackrc file, and list the OVN images:
$ source ~/stackrc
$ openstack tripleo container image list | grep '\-ovn'
Your list should resemble the following example. It includes containers for the OVN databases, OVN controller, the metadata agent, and the neutron server agent.
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-northd:17.1_20240725.1
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-sb-db-server:17.1_20240725.1
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-controller:17.1_20240725.1
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-neutron-server-ovn:17.1_20240725.1
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-nb-db-server:17.1_20240725.1
docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-neutron-metadata-agent-ovn:17.1_20240725.1
If your original deployment uses VXLAN, you might need to adjust maximum transmission unit (MTU) values. Proceed to Section 2.3, “Lowering MTU for migration from a VXLAN OVS deployment”.
If your original deployment uses VLAN networks, you can skip the MTU adjustments and proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”.
2.3. Lowering MTU for migration from a VXLAN OVS deployment
If your pre-migration OVS deployment uses the VXLAN tunneling protocol, you might need to reduce the network maximum transmission unit (MTU) by 8 bytes before migrating to OVN, which uses the Geneve tunneling protocol.
Consider performing this procedure in a dedicated maintenance window period before the migration.
VXLAN packets reserve 50 bytes of data for header content. This includes 42 bytes of standard outer headers plus an 8-byte VXLAN header. If the physical network uses the standard Ethernet MTU of 1500 bytes, you can set the MTU on your VXLAN networks to 1450 and traffic can pass without fragmentation.
Geneve packets reserve 58 bytes of data for header content. This includes the 42 bytes of standard outer headers plus a 16-byte Geneve header. Thus, if the physical network has an MTU less than 1508, you must reduce the MTU on your pre-migration OpenStack VXLAN networks by 8 bytes to avoid the need for fragmentation.
If your physical network can transmit at least 58 bytes more than your OpenStack VXLAN network MTU without fragmentation, skip this procedure and proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”. For example, you can skip this procedure if your physical network is configured for 9000-byte jumbo frames and your OpenStack network MTU is 8942 or less.
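To check the current MTU of a network when making this decision, you can query the Networking service. This is a sketch; the network name is a placeholder:
$ openstack network show <network-name> -c name -c mtu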
The RHOSP OVN migration tool automatically lowers the MTU by 8 bytes on VXLAN and GRE overcloud networks. In the following procedure, you use the tool to:
- increase the frequency of DHCP renewals by reducing the DHCP T1 timer to 30 seconds.
- reduce the MTU size on existing VXLAN networks by 8 bytes.
If your deployment does not use DHCP to configure all VM instances, you must manually reduce MTU on the excluded instances.
Prerequisites
- You have completed the steps in Section 2.1, “Preparing the environment for migration to the OVN mechanism driver”
- You have completed the steps in Section 2.2, “Preparing container images for migration of the ML2 mechanism driver from OVS to OVN”.
- Your pre-migration deployment is Red Hat OpenStack Platform (RHOSP) 17.1 or later with VXLAN.
Procedure
Run ovn_migration.sh reduce-dhcp-t1. This lowers the T1 parameter of the internal neutron DHCP servers by configuring dhcp_renewal_time in /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini on all nodes where the DHCP agent is running:
$ ovn_migration.sh reduce-dhcp-t1 | sudo tee -a /var/log/ovn_migration_output.txt
Verify that the T1 parameter has propagated to existing VMs. The process might take up to four hours.
- Log in to one of the Compute nodes.
Run tcpdump over one of the VM taps attached to a project network. If T1 propagation is successful, expect to see requests occur approximately every 30 seconds:
[heat-admin@overcloud-novacompute-0 ~]$ sudo tcpdump -i tap52e872c2-e6 port 67 or port 68 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap52e872c2-e6, link-type EN10MB (Ethernet), capture size 262144 bytes
13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
13:17:28.961321 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
13:17:56.249899 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
Note: This verification is not possible with cirros VMs. The cirros udhcpc implementation does not respond to DHCP option 58 (T1). Try this verification on a port that belongs to a full Linux VM. Red Hat recommends that you check all the different operating systems represented in your workloads, such as variants of Windows and Linux distributions.
- If any VM instances were not updated to reflect the change to the T1 parameter of DHCP, reboot them.
Lower the MTU of the pre-migration VXLAN networks:
$ ovn_migration.sh reduce-mtu | sudo tee -a /var/log/ovn_migration_output.txt
This step reduces the MTU network by network and tags each completed network with adapted_mtu. The tool acts only on VXLAN networks. This step does not change any values if your deployment has only VLAN project networks.
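To confirm which networks the tool adjusted, you can filter networks by the adapted_mtu tag. A sketch, assuming standard OpenStack CLI tag filtering:
$ openstack network list --tags adapted_mtu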
If you have any instances with static IP assignment on VXLAN project networks, manually reduce the instance MTU by 8 bytes. For example, if the VXLAN-based MTU was 1450, change it to 1442.
Note: Perform this step only if you have manually provided static IP assignments and MTU settings on VXLAN project networks. By default, DHCP provides the IP assignment and MTU settings.
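On such an instance, one way to lower the interface MTU, assuming the interface is eth0 (illustrative; the change does not persist across reboots unless you also update the instance’s network configuration):
$ sudo ip link set mtu 1442 dev eth0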
- Proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”.
2.4. Migrating the ML2 mechanism driver from OVS to OVN
The ovn_migration.sh script performs environmental setup, migration, and cleanup tasks related to the in-place migration of the ML2 mechanism driver from OVS to OVN.
Prerequisites
- You have completed the steps in Section 2.1, “Preparing the environment for migration to the OVN mechanism driver”.
- You have also completed all required migration steps through Section 2.2, “Preparing container images for migration of the ML2 mechanism driver from OVS to OVN”.
- If your original deployment uses VXLAN or GRE, you have also completed the steps in Section 2.3, “Lowering MTU for migration from a VXLAN OVS deployment”.
Procedure
Stop all operations that interact with the Networking Service (neutron) API, such as creating new networks, subnets, routers, or instances, or migrating instances between Compute nodes.
Interaction with the Networking API during migration can cause undefined behavior. You can restart the API operations after completing the migration.
Run ovn_migration.sh start-migration to begin the migration process. The tee command creates a copy of the script output for troubleshooting purposes.
$ ovn_migration.sh start-migration | sudo tee -a /var/log/ovn_migration_output.txt
Result
The script performs the following actions.
- Updates the overcloud stack to deploy OVN alongside reference implementation services using the temporary bridge br-migration instead of br-int. The temporary bridge helps to limit downtime during migration.
- Generates the OVN northbound database by running neutron-ovn-db-sync-util. The utility examines the Neutron database to create equivalent resources in the OVN northbound database.
- Re-assigns ovn-controller to br-int instead of br-migration.
- Removes node resources that are not used by ML2/OVN, including the following:
  - Cleans up network namespaces (fip, snat, qrouter, qdhcp).
  - Removes any unnecessary patch ports on br-int.
  - Removes br-tun and br-migration OVS bridges.
  - Deletes ports from br-int that begin with qr-, ha-, and qg- (using neutron-netns-cleanup).
- Deletes Networking Service (neutron) agents and Networking Service HA internal networks from the database through the Networking Service API.
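As a quick post-migration spot-check (a suggestion, not part of the script’s output), you can list the Networking Service agents and confirm that OVN agents have replaced the OVS, DHCP, and L3 agents:
$ source ~/overcloudrc
$ openstack network agent list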