Chapter 2. Migrating the ML2 mechanism driver from OVS to OVN


2.1. Preparing the environment for migration to the OVN mechanism driver

Environment assessment and preparation are critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.

Prerequisites

  • Your deployment is the latest RHOSP 17.1 version. In other words, if you need to upgrade or update your OpenStack version, perform the upgrade or update first, and then perform the ML2/OVS to ML2/OVN migration.
  • At least one IP address is available for each subnet pool.

    The OVN mechanism driver creates a metadata port for each subnet. Each metadata port claims an IP address from the IP address pool.

  • You have worked with your Red Hat Technical Account Manager or Global Professional Services to plan the migration and have filed a proactive support case. See How to submit a Proactive Case.
  • If your ML2/OVS deployment uses VXLAN project networks, review the potential adjustments described in Section 2.3, “Lowering MTU for migration from a VXLAN OVS deployment”.

Procedure

  1. Create an ML2/OVN stage deployment to obtain the baseline configuration of your target ML2/OVN deployment and test the feasibility of the target deployment.

    Design the stage deployment with the same basic roles, routing, and topology as the planned post-migration production deployment. Save the full openstack overcloud deploy command, along with all deployment arguments, into a file called overcloud-deploy.sh. Also save any files referenced by the openstack overcloud deploy command, such as environment files. You need these files later in this procedure to configure the migration’s target ML2/OVN environment.

    Note

    Use these files only for creation of the stage deployment and in the migration. Do not re-use them after the migration.
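
    For reference, the following is a minimal sketch of what the saved overcloud-deploy.sh file might contain. The environment file names shown here are placeholders; your script must contain the actual arguments and files from your deployment:

      #!/bin/bash
      openstack overcloud deploy \
      --templates /usr/share/openstack-tripleo-heat-templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/network-environment.yaml \
      -e /home/stack/containers-prepare-parameter.yaml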

  2. Install openstack-neutron-ovn-migration-tool.

    sudo dnf install openstack-neutron-ovn-migration-tool
  3. Copy the overcloud-deploy.sh script that you created in Step 1 and rename the copy to overcloud-migrate-ovn.sh. Confirm that all paths for the overcloud deploy command inside the overcloud-migrate-ovn.sh are still correct. You customize some arguments in the overcloud-migrate-ovn.sh script in subsequent steps.
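
    For example, the following commands make the copy and then list the template and environment file paths for review. Adjust the paths if your scripts are stored elsewhere:

      $ cp ~/overcloud-deploy.sh ~/overcloud-migrate-ovn.sh
      $ grep -nE '(--templates|-e )' ~/overcloud-migrate-ovn.sh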
  4. Find your migration scenario in the following list and perform the appropriate steps to customize the openstack deploy command in overcloud-migrate-ovn.sh.

    In the deployment command, pay careful attention to the order of the -e arguments that add environment files. The environment file with the generic defaults (such as neutron-ovn-dvr-ha.yaml) must precede the -e argument that specifies the file with custom network environment settings such as bridge mappings.

    Scenario 1: DVR to DVR, Compute nodes have connectivity to the external network
    • In overcloud-migrate-ovn.sh, add custom heat template file arguments to the openstack overcloud deploy command. Add them after the core template file arguments.

      The following command example uses the default neutron-ovn-dvr-ha.yaml heat template file. Your deployment might use multiple heat files to define your OVN environment. Add each with a separate -e argument.

      openstack overcloud deploy \
      --templates /usr/share/openstack-tripleo-heat-templates \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
    Scenario 2: Centralized routing to centralized routing (no DVR)
    • If your deployment uses SR-IOV and other NFV features, in overcloud-migrate-ovn.sh, use -e arguments to add the SR-IOV environment files to the openstack overcloud deploy command. Add the SR-IOV environment file arguments after the core template environment file arguments and other custom environment file arguments, as shown in the sketch at the end of this scenario. For an example of an SR-IOV environment file, see /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml.
    • Leave any custom network modifications the same as they were before migration.
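
      For example, the SR-IOV environment file argument comes after the core template arguments and any custom environment file arguments. In the following sketch, custom-network-environment.yaml is a placeholder for your own custom environment files:

      openstack overcloud deploy \
      --templates /usr/share/openstack-tripleo-heat-templates \
      ...
      -e /home/stack/templates/custom-network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml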
    Scenario 3: Centralized routing to DVR, and Compute nodes connected to external networks through br-ex
    • Ensure that Compute nodes are connected to the external network through the br-ex bridge. For example, in an environment file such as compute-dvr.yaml, set the following parameters. Then use -e to add the environment file to the openstack overcloud deploy command in the script overcloud-migrate-ovn.sh:

      - type: ovs_bridge
        # Defaults to br-ex, anything else requires specific
        # bridge mapping entries for it to be used.
        name: bridge_name
        use_dhcp: false
        members:
          - type: interface
            name: nic3
            # force the MAC address of the bridge to this interface
            primary: true
  5. Add the following arguments at the end of the overcloud deploy command in overcloud-migrate-ovn.sh:

    -e /usr/share/openstack-tripleo-heat-templates/environments/disable-container-manage-clean-orphans.yaml \
    -e $HOME/ovn-extras.yaml
  6. If router appears as a value for NeutronServicePlugins or NeutronPluginExtensions in any environment file or template, replace the value router with ovn-router. For example, in tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml:

    parameter_defaults:
       NeutronServicePlugins: "ovn-router,trunk,qos,placement"
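
    One way to locate these occurrences is to search the directory that holds your custom templates and environment files. In the following example, /home/stack/templates is a placeholder for that directory. Edit each listed file and change router to ovn-router in those parameter values:

      $ grep -rlE 'NeutronServicePlugins|NeutronPluginExtensions' /home/stack/templates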
  7. Ensure that all users have execution privileges on the file overcloud-migrate-ovn.sh. The script requires execution privileges during the migration process.

    $ chmod a+x ~/overcloud-migrate-ovn.sh
  8. Use export commands to set the following migration-related environment variables. For example:

    $ export OVERCLOUDRC_FILE=~/myovercloudrc
    STACKRC_FILE

    The stackrc file in your undercloud.

    Default: ~/stackrc

    OVERCLOUDRC_FILE

    The overcloudrc file in your undercloud.

    Default: ~/overcloudrc

    OVERCLOUD_OVN_DEPLOY_SCRIPT

    The deployment script.

    Default: ~/overcloud-migrate-ovn.sh

    DHCP_RENEWAL_TIME

    The DHCP renewal time, in seconds, to set in the DHCP agent configuration file.

    Default: 30
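
    For example, the following export block sets each variable to its documented default. Adjust the values to match your environment:

      $ export STACKRC_FILE=~/stackrc
      $ export OVERCLOUDRC_FILE=~/overcloudrc
      $ export OVERCLOUD_OVN_DEPLOY_SCRIPT=~/overcloud-migrate-ovn.sh
      $ export DHCP_RENEWAL_TIME=30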

  9. Ensure that you are in the ovn-migration directory and run the command ovn_migration.sh generate-inventory to generate the hosts_for_migration inventory file and the ansible.cfg file:

    $ ovn_migration.sh generate-inventory   | sudo tee -a /var/log/ovn_migration_output.txt
  10. Review the hosts_for_migration file for accuracy:

    • Ensure the lists match your environment.
    • Ensure there are ovn controllers on each node.
    • Ensure there are no list headings (such as [ovn-controllers]) that do not have list items under them.
    • From the ovn-migration directory, run the command ansible -i hosts_for_migration -m ping all to verify connectivity to the hosts in the inventory.
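
    For reference, the generated inventory might resemble the following illustrative layout. The host names and the ovn-computes group shown here are hypothetical; the actual contents depend on your deployment:

      [ovn-controllers]
      overcloud-controller-0
      overcloud-controller-1
      overcloud-controller-2

      [ovn-computes]
      overcloud-novacompute-0
      overcloud-novacompute-1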
  11. (Optional) Back up the deployment to prepare a potential migration revert in the case that something unexpected happens during migration.

    1. Use export commands to set the following environment variables if you plan to use the ovn_migration.sh backup command to back up the controller nodes:

      BACKUP_MIGRATION_IP

      The IP address of the server where backup is stored.

      Default: 192.168.24.1

      BACKUP_MIGRATION_CTL_PLANE_CIDRS

      A comma-separated string of control plane subnets in CIDR notation for all nodes that will be backed up.

      Default: 192.168.24.0/24

      You can see a list of all relevant environment variables at the beginning of the /usr/bin/ovn_migration.sh file.

      CONTROLLER_GROUP

      Host group name used by Ansible to back up controllers.

      Default: Controller

      If your controller group has a name other than Controller, export that name as the value of CONTROLLER_GROUP. For example, in SR-IOV environments, the controller group name might be ControllerSriov.

      OVERCLOUD_OVS_REVERT_SCRIPT

      Used to optionally revert from an unsuccessful OVN migration if you created the optional backup.

      Default: ~/overcloud-revert-ovs.sh
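
      For example, the following export block sets each backup variable to its documented default. Adjust the values to match your environment:

        $ export BACKUP_MIGRATION_IP=192.168.24.1
        $ export BACKUP_MIGRATION_CTL_PLANE_CIDRS=192.168.24.0/24
        $ export CONTROLLER_GROUP=Controller
        $ export OVERCLOUD_OVS_REVERT_SCRIPT=~/overcloud-revert-ovs.sh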

    2. Back up the control plane. Use the backup mechanism of your choice to back up the controller nodes. The supported choice is the default ovn_migration.sh backup command, which uses the ReaR backup tool.

      $ ovn_migration.sh backup
    3. Back up the templates and environment files that were used to deploy the overcloud. The ovn_migration.sh backup command does not back up the overcloud. If you need to revert the controller nodes after a partial or failed migration, you need this backup to restore the OVS overcloud.
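
      For example, one way to archive these files follows. The archive name and the ~/templates path are placeholders; include every file that your deploy command references:

        $ tar -czf ~/overcloud-ovs-templates-backup.tar.gz ~/overcloud-deploy.sh ~/templates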
    4. Copy the script that you used to deploy the original RHOSP 17.1 ML2/OVS deployment. For example, the original script might be named overcloud_deploy.sh. Name the copy overcloud-revert-ovs.sh.

      Warning

      If overcloud-revert-ovs.sh creates a file, make sure to specify an absolute path to that file. For example, if you use the --log-file argument, specify the log file with an absolute path. The migration revert playbook uses the variable $ANSIBLE_DIR (which defaults to /usr/share/ansible/neutron-ovn-migration). If your script creates a file with a relative path, Ansible tries to write it in $ANSIBLE_DIR, where the revert user might not have adequate permissions.
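
      For example, assuming the original deployment script is named overcloud_deploy.sh as described above:

        $ cp ~/overcloud_deploy.sh ~/overcloud-revert-ovs.sh

      If the copied script uses the --log-file argument, change its value to an absolute path, such as /home/stack/overcloud-revert-ovs.log.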

    5. Create a file /home/stack/ovs-extra.yml with the following contents:

      parameter_defaults:
        ForceNeutronDriverUpdate: true
    6. Ensure that the final environment file argument in overcloud-revert-ovs.sh is the following.

      -e /home/stack/ovs-extra.yml
    7. Store overcloud-revert-ovs.sh securely. You will need it if you revert a failed migration.
  12. Proceed to Section 2.2, “Preparing container images for migration of the ML2 mechanism driver from OVS to OVN”.

2.2. Preparing container images for migration of the ML2 mechanism driver from OVS to OVN

Environment assessment and preparation are critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.

Procedure

  1. Prepare the new container images for use after the migration to ML2/OVN.

    1. Create a containers-prepare-parameter.yaml file in the home directory if it is not already present.

      $ test -f $HOME/containers-prepare-parameter.yaml || sudo openstack tripleo container image prepare default \
      --output-env-file $HOME/containers-prepare-parameter.yaml
    2. Verify that containers-prepare-parameter.yaml is included at the end of your $HOME/overcloud-migrate-ovn.sh and $HOME/overcloud-deploy.sh files.
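
      One quick way to check, assuming both scripts reference the file by name:

        $ grep -n containers-prepare-parameter.yaml $HOME/overcloud-migrate-ovn.sh $HOME/overcloud-deploy.sh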
    3. Change the neutron_driver in the containers-prepare-parameter.yaml file to ovn:

      $ sed -i -E 's/neutron_driver:([ ]\w+)/neutron_driver: ovn/' $HOME/containers-prepare-parameter.yaml
    4. Verify the changes to the neutron_driver:

      $ grep neutron_driver $HOME/containers-prepare-parameter.yaml
      neutron_driver: ovn
    5. Update the images:

      $ sudo openstack tripleo container image prepare \
      --environment-file /home/stack/containers-prepare-parameter.yaml
      Note

      Provide the full path to your containers-prepare-parameter.yaml file. Otherwise, the command completes very quickly without updating the image list or providing an error message.

  2. On the undercloud, validate the updated images.

    Log in to the undercloud as the user stack, source the stackrc file, and list the OVN container images:

    $ source ~/stackrc
    $ openstack tripleo container image list | grep '\-ovn'

    Your list should resemble the following example. It includes containers for the OVN databases, OVN controller, the metadata agent, and the neutron server agent.

    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-northd:17.1_20240725.1
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-sb-db-server:17.1_20240725.1
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-controller:17.1_20240725.1
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-neutron-server-ovn:17.1_20240725.1
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-nb-db-server:17.1_20240725.1
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-neutron-metadata-agent-ovn:17.1_20240725.1

    If you are migrating to the OVN mechanism driver in RHOSP 16.2, the listings resemble the following:

    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-northd:16.2_20211110.2
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-sb-db-server:16.2_20211110.2
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-controller:16.2_20211110.2
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-neutron-server-ovn:16.2_20211110.2
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-nb-db-server:16.2_20211110.2
    docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:16.2_20211110.2
  3. If your original deployment uses VXLAN, you might need to adjust maximum transmission unit (MTU) values. Proceed to Section 2.3, “Lowering MTU for migration from a VXLAN OVS deployment”.

    If your original deployment uses VLAN networks, you can skip the MTU adjustments and proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”.

2.3. Lowering MTU for migration from a VXLAN OVS deployment

If your pre-migration OVS deployment uses the VXLAN tunneling protocol, you might need to reduce the network maximum transmission unit (MTU) by 8 bytes before migrating to OVN, which uses the Geneve tunneling protocol.

Note

Consider performing this procedure in a dedicated maintenance window period before the migration.

VXLAN packets reserve 50 bytes of data for header content. This includes 42 bytes of standard outer headers plus an 8-byte VXLAN header. If the physical network uses the standard ethernet MTU of 1500 bytes, you can set the MTU on your VXLAN networks to 1450 and traffic can pass without fragmentation.

Geneve packets reserve 58 bytes of data for header content. This includes the 42 bytes of standard outer headers plus a 16-byte Geneve header. Thus, if the physical network has an MTU less than 1508, you must reduce the MTU on your pre-migration OpenStack VXLAN networks by 8 bytes to avoid the need for fragmentation.
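
For example, with a standard 1500-byte physical network MTU:

  1500 - 50 (VXLAN overhead)  = 1450 (VXLAN network MTU)
  1500 - 58 (Geneve overhead) = 1442 (Geneve network MTU)
  1450 - 1442 = 8 (bytes to remove from the VXLAN network MTU)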

Note

If your physical network can transmit at least 58 bytes more than your OpenStack VXLAN network MTU without fragmentation, skip this procedure and proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”. For example, you can skip this procedure if your physical network is configured for 9000-byte jumbo frames and your OpenStack network MTU is 8942 or less.

The RHOSP OVN migration tool automatically lowers the MTU by 8 bytes on VXLAN and GRE overcloud networks. In the following procedure, you use the tool to:

  • increase the frequency of DHCP renewals by reducing the DHCP T1 timer to 30 seconds.
  • reduce the MTU size on existing VXLAN networks by 8 bytes.

If your deployment does not use DHCP to configure all VM instances, you must manually reduce MTU on the excluded instances.

Prerequisites

  • Your pre-migration ML2/OVS deployment uses VXLAN project networks.
  • You completed the preparation steps in Section 2.1 and Section 2.2.

Procedure

  1. Run ovn_migration.sh reduce-dhcp-t1. This command lowers the T1 parameter of the internal neutron DHCP servers by configuring dhcp_renewal_time in /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini on all nodes where the DHCP agent is running.

    $ ovn_migration.sh reduce-dhcp-t1   | sudo tee -a /var/log/ovn_migration_output.txt
  2. Verify that the T1 parameter has propagated to existing VMs. The process might take up to four hours.

    • Log in to one of the Compute nodes.
    • Run tcpdump over one of the VM taps attached to a project network.

      If T1 propagation is successful, expect to see requests occur approximately every 30 seconds:

      [heat-admin@overcloud-novacompute-0 ~]$ sudo tcpdump -i tap52e872c2-e6 port 67 or port 68 -n
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on tap52e872c2-e6, link-type EN10MB (Ethernet), capture size 262144 bytes
      13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
      13:17:28.961321 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
      13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
      13:17:56.249899 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
      Note

      This verification is not possible with CirrOS VMs. The CirrOS udhcpc implementation does not respond to DHCP option 58 (T1). Try this verification on a port that belongs to a full Linux VM. Red Hat recommends that you check all the different operating systems represented in your workloads, such as variants of Windows and Linux distributions.

  3. If any VM instances were not updated to reflect the change to the T1 parameter of DHCP, reboot them.
  4. Lower the MTU of the pre-migration VXLAN networks:

    $ ovn_migration.sh reduce-mtu   | sudo tee -a /var/log/ovn_migration_output.txt

    This step reduces the MTU, network by network, and tags each completed network with adapted_mtu. The tool acts only on VXLAN networks. This step does not change any values if your deployment has only VLAN project networks.

  5. If you have any instances with static IP assignment on VXLAN project networks, manually reduce the instance MTU by 8 bytes. For example, if the VXLAN-based MTU was 1450, change it to 1442.

    Note

    Perform this step only if you have manually provided static IP assignments and MTU settings on VXLAN project networks. By default, DHCP provides the IP assignment and MTU settings.
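
    The following is a minimal example of this manual adjustment inside a guest, assuming the interface is named eth0 and the original VXLAN MTU was 1450. The change does not persist across reboots unless you also update the instance's network configuration:

      $ sudo ip link set dev eth0 mtu 1442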

  6. Proceed to Section 2.4, “Migrating the ML2 mechanism driver from OVS to OVN”.

2.4. Migrating the ML2 mechanism driver from OVS to OVN

The ovn_migration.sh script performs environment setup, migration, and cleanup tasks related to the in-place migration of the ML2 mechanism driver from OVS to OVN.

Prerequisites

  • You completed the preparation steps in Section 2.1 and Section 2.2 and, if your deployment uses VXLAN project networks, the MTU adjustments in Section 2.3.

Procedure

  1. Stop all operations that interact with the Networking Service (neutron) API, such as creating new networks, subnets, routers, or instances, or migrating instances between compute nodes.

    Interaction with the Networking Service API during migration can cause undefined behavior. You can restart API operations after the migration is complete.

  2. Run ovn_migration.sh start-migration to begin the migration process. The tee command creates a copy of the script output for troubleshooting purposes.

    $ ovn_migration.sh start-migration  | sudo tee -a /var/log/ovn_migration_output.txt

Result

The script performs the following actions.

  • Updates the overcloud stack to deploy OVN alongside reference implementation services using the temporary bridge br-migration instead of br-int. The temporary bridge helps to limit downtime during migration.
  • Generates the OVN northbound database by running neutron-ovn-db-sync-util. The utility examines the Neutron database to create equivalent resources in the OVN northbound database.
  • Re-assigns ovn-controller to br-int instead of br-migration.
  • Removes node resources that are not used by ML2/OVN, including the following.

    • Cleans up network namespaces (fip, snat, qrouter, qdhcp).
    • Removes any unnecessary patch ports on br-int.
    • Removes br-tun and br-migration ovs bridges.
    • Deletes ports from br-int that begin with qr-, ha-, and qg- (using neutron-netns-cleanup).
  • Deletes Networking Service (neutron) agents and Networking Service HA internal networks from the database through the Networking Service API.