
Chapter 7. OVN-Kubernetes network plugin


7.1. About the OVN-Kubernetes network plugin

The OpenShift Dedicated cluster uses a virtualized network for pod and service networks.

Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Dedicated. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.

Note

OVN-Kubernetes is the default networking solution for OpenShift Dedicated and single-node OpenShift deployments.

OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to decide how packets travel through the network. For more information, see the Open Virtual Network website.

OVN-Kubernetes is a series of daemons for OVS that transform virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device. This means that network administrators can configure, manage, and watch the flow of network traffic.
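
Because OVN programs OVS with OpenFlow rules, you can inspect those rules directly on a cluster node. The following is an illustrative sketch only, assuming cluster-admin access and a node named <node_name>; the br-int bridge and the OpenFlow13 protocol version are the values that OVN-Kubernetes typically uses:

    $ oc debug node/<node_name> -- chroot /host ovs-ofctl -O OpenFlow13 dump-flows br-int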

OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing in logical flows that are translated into OpenFlow flows. For example, when a pod sends a DHCP request to the DHCP server on the network, a logical flow rule matches the request so that OVN-Kubernetes can handle the packet and the server can respond with the gateway, DNS server, IP address, and other information.

OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:
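
To see these components on a running cluster, you can list the daemon sets and pods in the openshift-ovn-kubernetes namespace. This is an illustrative check only and requires cluster-admin privileges:

    $ oc get daemonset -n openshift-ovn-kubernetes
    $ oc get pods -n openshift-ovn-kubernetes -o wide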

  • Egress IPs
  • Firewalls
  • Hardware offloading
  • Hybrid networking
  • Internet Protocol Security (IPsec) encryption
  • IPv6
  • Multicast
  • Network policy and network policy logs
  • Routers

7.1.1. OVN-Kubernetes purpose

The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies:

  • OVN to manage network traffic flows.
  • Kubernetes network policy support and logs, including ingress and egress rules (see the sketch after this list).
  • The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes.
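
As an illustration of the network policy support, the following is a minimal sketch of a standard Kubernetes NetworkPolicy object with hypothetical namespace and label names. It allows ingress to pods labeled app=web only from pods labeled role=frontend and limits egress to TCP port 443:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-web
      namespace: example-namespace
    spec:
      podSelector:
        matchLabels:
          app: web
      policyTypes:
      - Ingress
      - Egress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend
      egress:
      - ports:
        - protocol: TCP
          port: 443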

The OVN-Kubernetes network plugin supports the following capabilities:

  • Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking.
  • Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading.
  • IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power®, IBM Z®, and Red Hat OpenStack Platform (RHOSP) platforms.
  • IPv6 single-stack networking on RHOSP and bare metal platforms.
  • IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform.
  • Egress firewall devices and egress IP addresses (see the sketch after this list).
  • Egress router devices that operate in redirect mode.
  • IPsec encryption of intracluster communications.
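
As an illustration of the egress firewall capability, OVN-Kubernetes provides the EgressFirewall custom resource. The following is a minimal sketch with a hypothetical namespace and CIDR; the object is created per project and must be named default. It allows traffic to one external subnet and denies all other external traffic:

    apiVersion: k8s.ovn.org/v1
    kind: EgressFirewall
    metadata:
      name: default
      namespace: example-namespace
    spec:
      egress:
      - type: Allow
        to:
          cidrSelector: 203.0.113.0/24
      - type: Deny
        to:
          cidrSelector: 0.0.0.0/0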

Red Hat does not support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Configuring the primary network interface, including using the NMState Operator to configure bonding for the interface.
  • Configuring a sub-interface or additional network interface on a network device that uses the Open vSwitch (OVS) or an OVN-Kubernetes br-ex bridge network.
  • Creating additional virtual local area networks (VLANs) on the primary network interface.
  • Using the primary network interface that you created for a node during cluster installation, such as eth0 or bond0, to create additional secondary networks.

Red Hat does support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Creating additional VLANs from the base physical interface, such as eth0.100, where you configured the primary network interface as a VLAN for a node during cluster installation. This works because the Open vSwitch (OVS) bridge attaches to the initial VLAN sub-interface, such as eth0.100, leaving the base physical interface available for new configurations.
  • Creating an additional OVN secondary network with a localnet topology requires that you define the secondary network in a NodeNetworkConfigurationPolicy (NNCP) object. After you create the network, pods or virtual machines (VMs) can attach to it. These secondary networks provide a dedicated connection to the physical network, which might or might not use VLAN tagging. You cannot access these networks from the host network of a node if the host does not have the required network settings. See the sketch after this list for an example.
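
The following is a minimal sketch of this supported localnet configuration, using hypothetical names such as localnet1 and example-namespace. An NNCP object maps the localnet network to the br-ex bridge, and a NetworkAttachmentDefinition then defines the secondary network that pods or VMs can attach to; verify the exact fields against the NMState and OVN-Kubernetes documentation for your version:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: mapping-localnet1
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      desiredState:
        ovn:
          bridge-mappings:
          - localnet: localnet1
            bridge: br-ex
            state: present
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: localnet1
      namespace: example-namespace
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "localnet1",
          "type": "ovn-k8s-cni-overlay",
          "topology": "localnet",
          "netAttachDefName": "example-namespace/localnet1"
        }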

7.1.2. OVN-Kubernetes IPv6 and dual-stack limitations

The OVN-Kubernetes network plugin has the following limitations:

  • For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

    I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
    I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
    F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

    The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.

  • For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field includes a message that the default gateway interface could not be found, as shown in the following output:

    I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
    F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface

    The only resolution is to reconfigure the host networking so that the routing tables for both IP families contain a default gateway (see the verification sketch after this list).

  • If you set the ipv6.disable parameter to 1 in the kernelArgument section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of OpenShift Dedicated fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1.
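
To verify the first two requirements on a dual-stack cluster, you can check the default routes for both IP families on a node and confirm that the ovnkube-node pods are healthy. This is an illustrative sketch that assumes cluster-admin access and a node named <node_name>; both route commands must return a default route, and both routes must use the same interface:

    $ oc debug node/<node_name> -- chroot /host ip -4 route show default
    $ oc debug node/<node_name> -- chroot /host ip -6 route show default
    $ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node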

7.1.3. Session affinity

Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.

Stickiness timeout for session affinity

The OVN-Kubernetes network plugin for OpenShift Dedicated calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
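
For example, you can enable session affinity and set the stickiness timeout on a standard Kubernetes Service object. The following is a minimal sketch with hypothetical names; timeoutSeconds defines how long the service can go without receiving a packet from the client before the session expires:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      namespace: example-namespace
    spec:
      selector:
        app: web
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10800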

7.2. Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin

As an OpenShift Dedicated cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status by using the OCM CLI.

Consider the following before initiating the migration:

  • The cluster version must be 4.16.43 or later.
  • The migration process cannot be interrupted.
  • Migrating back to the SDN network plugin is not possible.
  • Cluster nodes will be rebooted during migration.
  • There will be no impact to workloads that are resilient to node disruptions.
  • Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations.
Warning

You can only initiate the migration on clusters that are version 4.16.43 or later.

Important

OpenShift Cluster Manager API command-line interface (ocm) is a Developer Preview feature only. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.

Procedure

  1. Create a JSON file with the following content:

    {
      "type": "sdnToOvn"
    }
    • Optional: Within the JSON file, you can configure internal subnets using any or all of the options join, masquerade, and transit, along with a single CIDR per option, as shown in the following example:

      {
        "type": "sdnToOvn",
        "sdn_to_ovn": {
          "transit_ipv4": "192.168.255.0/24",
          "join_ipv4": "192.168.255.0/24",
          "masquerade_ipv4": "192.168.255.0/24"
        }
      }
      Note

      OVN-Kubernetes reserves the following IP address ranges:

      • 100.64.0.0/16. This IP address range is used for the internalJoinSubnet parameter of OVN-Kubernetes by default.
      • 100.88.0.0/16. This IP address range is used for the internalTransitSwitchSubnet parameter of OVN-Kubernetes by default.

      If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. For more information, see Patching OVN-Kubernetes address ranges in the Additional resources section.

  2. To initiate the migration, run the following post request in a terminal window:

    $ ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations \
      --body=myjsonfile.json

    Replace {cluster_id} with the ID of the cluster that you want to migrate to the OVN-Kubernetes network plugin, and replace myjsonfile.json with the name of the JSON file that you created in the previous step.

    Example output

    {
      "kind": "ClusterMigration",
      "href": "/api/clusters_mgmt/v1/clusters/2gnts65ra30sclb114p8qdc26g5c8o3e/migrations/2gois8j244rs0qrfu9ti2o790jssgh9i",
      "id": "7sois8j244rs0qrhu9ti2o790jssgh9i",
      "cluster_id": "2gnts65ra30sclb114p8qdc26g5c8o3e",
      "type": "sdnToOvn",
      "state": {
        "value": "scheduled",
        "description": ""
      },
      "sdn_to_ovn": {
        "transit_ipv4": "100.65.0.0/16",
        "join_ipv4": "100.66.0.0/16"
      },
      "creation_timestamp": "2025-02-05T14:56:34.878467542Z",
      "updated_timestamp": "2025-02-05T14:56:34.878467542Z"
    }

Verification

  • To check the status of the migration, run the following command:

    $ ocm get cluster <cluster_id>/migrations

    Replace <cluster_id> with the ID of the cluster that the migration was applied to.
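
    To extract only the migration state from the response, you can pipe the output to jq. This is an illustrative sketch that assumes the endpoint returns a list with an items array, as other OCM list endpoints do:

      $ ocm get cluster <cluster_id>/migrations | jq -r '.items[].state.value'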