OVN-Kubernetes network plugin
In-depth configuration and troubleshooting for the OVN-Kubernetes network plugin in OpenShift Dedicated
Chapter 1. About the OVN-Kubernetes network plugin
The OpenShift Dedicated cluster uses a virtualized network for pod and service networks.
Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Dedicated. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
OVN-Kubernetes is the default networking solution for OpenShift Dedicated and single-node OpenShift deployments.
OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to decide how packets travel through the network. For more information, see the Open Virtual Network website.
OVN-Kubernetes is a series of daemons for OVS that transform virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device. With OpenFlow, network administrators can configure, manage, and monitor the flow of network traffic.
OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing within logic flows that equate to OpenFlow flows. For example, if a pod sends a DHCP request to the DHCP server on the network, a logic flow rule matches the request and enables OVN-Kubernetes to handle the packet, so that the server can respond with the gateway, DNS server, IP address, and other information.
OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:
- Egress IPs
- Firewalls
- Hardware offloading
- Hybrid networking
- Internet Protocol Security (IPsec) encryption
- IPv6
- Multicast
- Network policy and network policy logs
- Routers
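You can see these components on a cluster that you administer by listing the daemon sets and pods in the openshift-ovn-kubernetes namespace. The exact pod names vary by cluster; the app=ovnkube-node label matches the commands used later in this document:
$ oc get daemonset -n openshift-ovn-kubernetes
$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o wide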
1.1. OVN-Kubernetes purpose
The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies:
- OVN to manage network traffic flows.
- Kubernetes network policy support and logs, including ingress and egress rules.
- The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes.
The OVN-Kubernetes network plugin supports the following capabilities:
- Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking.
- Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading.
- IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power®, IBM Z®, and Red Hat OpenStack Platform (RHOSP) platforms.
- IPv6 single-stack networking on RHOSP and bare metal platforms.
- IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform.
- Egress firewall devices and egress IP addresses.
- Egress router devices that operate in redirect mode.
- IPsec encryption of intracluster communications.
Red Hat does not support the following postinstallation configurations that use the OVN-Kubernetes network plugin:
- Configuring the primary network interface, including using the NMState Operator to configure bonding for the interface.
- Configuring a sub-interface or additional network interface on a network device that uses the Open vSwitch (OVS) or an OVN-Kubernetes br-ex bridge network.
- Creating additional virtual local area networks (VLANs) on the primary network interface.
- Using the primary network interface, such as eth0 or bond0, that you created for a node during cluster installation to create additional secondary networks.
Red Hat does support the following postinstallation configurations that use the OVN-Kubernetes network plugin:
- Creating additional VLANs from the base physical interface, such as eth0.100, where you configured the primary network interface as a VLAN for a node during cluster installation. This works because the Open vSwitch (OVS) bridge attaches to the initial VLAN sub-interface, such as eth0.100, leaving the base physical interface available for new configurations.
- Creating an additional OVN secondary network with a localnet topology. You must define the secondary network in a NodeNetworkConfigurationPolicy (NNCP) object; a minimal sketch follows this list. After you create the network, pods or virtual machines (VMs) can attach to the network. These secondary networks provide a dedicated connection to the physical network, which might or might not use VLAN tagging. You cannot access these networks from the host network of a node where the host does not have the required setup, such as the required network settings.
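The following is a minimal sketch of such an NNCP object that maps a localnet network to the br-ex bridge. The policy name, node selector, and network name localnet1 are illustrative assumptions, and the exact schema depends on the NMState Operator version in your cluster, so verify it against the NMState documentation before applying it.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: localnet1-mapping                # illustrative policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # assumption: apply the mapping to worker nodes
  desiredState:
    ovn:
      bridge-mappings:
        - localnet: localnet1            # logical network name that the secondary network references
          bridge: br-ex                  # attach to the default OVN-Kubernetes bridge
          state: present                 # add the mapping
A secondary network with a localnet topology can then reference localnet1, and pods or VMs attach to it as described above.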
1.2. OVN-Kubernetes IPv6 and dual-stack limitations
The OVN-Kubernetes network plugin has the following limitations:
For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.
If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:
I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4
The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.
For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway.
If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:
I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface
The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.
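For either dual-stack requirement, you can inspect the default gateway configuration on a node before digging into the ovnkube-node pods. Assuming you can open a debug shell on the node, the following commands show the IPv4 and IPv6 default routes; both should exist and both should use the same interface:
$ oc debug node/<node_name> -- chroot /host ip route show default
$ oc debug node/<node_name> -- chroot /host ip -6 route show default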
- If you set the ipv6.disable parameter to 1 in the kernelArguments section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of OpenShift Dedicated fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1.
1.3. Session affinity
Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.
1.3.1. Stickiness timeout for session affinity
The OVN-Kubernetes network plugin for OpenShift Dedicated calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client continuously contacts the service, the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
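For reference, the following is a minimal sketch of a Service that enables client IP session affinity and sets the timeoutSeconds parameter. The service name, selector, and ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: example-service              # placeholder name
spec:
  selector:
    app: example                     # placeholder selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: ClientIP          # keep a given client on the same back end
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # stickiness timeout; the timer restarts with each received packet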
Chapter 2. Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin
As an OpenShift Dedicated cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the OCM CLI.
Consider the following before initiating the migration:
- The cluster version must be 4.16.43 or later.
- The migration process cannot be interrupted.
- Migrating back to the SDN network plugin is not possible.
- Cluster nodes will be rebooted during migration.
- There will be no impact to workloads that are resilient to node disruptions.
- Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations.
2.1. Initiating migration using the OpenShift Cluster Manager API command-line interface (ocm) CLI
You can only initiate the migration on clusters that are version 4.16.43 or later.
Prerequisites
- You installed the OpenShift Cluster Manager API command-line interface (ocm).
Note: The OpenShift Cluster Manager API command-line interface (ocm) is a Developer Preview feature only. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
Procedure
Create a JSON file with the following content:
{ "type": "sdnToOvn" }
{ "type": "sdnToOvn" }
Optional: Within the JSON file, you can configure internal subnets using any or all of the options join, masquerade, and transit, along with a single CIDR per option, as shown in the sketch that follows the note below.
Note: OVN-Kubernetes reserves the following IP address ranges:
- 100.64.0.0/16. This IP address range is used for the internalJoinSubnet parameter of OVN-Kubernetes by default.
- 100.88.0.0/16. This IP address range is used for the internalTransSwitchSubnet parameter of OVN-Kubernetes by default.
If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. For more information, see Patching OVN-Kubernetes address ranges in the Additional resources section.
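The example from the original procedure is not reproduced here. The following sketch shows the general shape of a request body that sets custom CIDRs for the three options. The nested field names (sdn_to_ovn, join_ipv4, masquerade_ipv4, transit_ipv4) and the CIDR values are assumptions for illustration only; confirm the exact schema against the OpenShift Cluster Manager API reference before use.
# Field names and CIDR values below are illustrative assumptions; verify them
# against the clusters_mgmt API documentation for your OCM version.
$ cat > myjsonfile.json <<'EOF'
{
  "type": "sdnToOvn",
  "sdn_to_ovn": {
    "join_ipv4": "100.65.0.0/16",
    "masquerade_ipv4": "169.254.64.0/18",
    "transit_ipv4": "100.89.0.0/16"
  }
}
EOF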
To initiate the migration, run the following post request in a terminal window:
$ ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations \
    --body=myjsonfile.json
Replace {cluster_id} with the ID of the cluster that you are migrating, and myjsonfile.json with the name of the JSON file that you created in the previous step.
Verification
To check the status of the migration, run the following command:
$ ocm get cluster <cluster_id>/migrations
Replace <cluster_id> with the ID of the cluster that the migration was applied to.
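If you do not have the cluster ID at hand, you can look it up with the ocm CLI, which prints cluster IDs alongside their names:
$ ocm list clusters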
Additional resources
- Patching OVN-Kubernetes address ranges
Chapter 3. Configuring a cluster-wide proxy
If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during an OpenShift Dedicated cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads.
Only cluster system egress traffic is proxied, including calls to the cloud provider API.
You can enable a proxy only for OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model.
If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster.
3.1. Prerequisites for configuring a cluster-wide proxy
To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation.
3.1.1. General requirements
- You are the cluster owner.
- Your account has sufficient privileges.
- You have an existing Virtual Private Cloud (VPC) for your cluster.
- You are using the Customer Cloud Subscription (CCS) model for your cluster.
- The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy must also be accessible from the VPC for the cluster and from the private subnets of the VPC.
You have added the following endpoints to your VPC:
- ec2.<aws_region>.amazonaws.com
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com
These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough.
Important: When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway.
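As one way to satisfy the S3 gateway endpoint requirement, you can create the endpoint with the AWS CLI. The VPC ID, region, and route table ID are placeholders for your own values, and this is a sketch rather than a complete procedure:
$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.<aws_region>.s3 \
    --route-table-ids <route_table_id>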
3.1.2. Network requirements
If your proxy re-encrypts egress traffic, you must create exclusions to several domain and port combinations required by OpenShift.
Your proxy must exclude re-encrypting the following OpenShift URLs:
Address | Protocol/Port | Function
---|---|---
observatorium-mst.api.openshift.com | https/443 | Required. Used for Managed OpenShift-specific telemetry.
sso.redhat.com | https/443 | The https://console.redhat.com/openshift site uses authentication from sso.redhat.com.
3.2. Responsibilities for additional trust bundles
If you supply an additional trust bundle, you are responsible for the following requirements:
- Ensuring that the contents of the additional trust bundle are valid
- Ensuring that the certificates, including intermediary certificates, contained in the additional trust bundle have not expired
- Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle
- Updating the cluster configuration with the updated additional trust bundle
3.3. Configuring a proxy during installation
You can configure an HTTP or HTTPS proxy when you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager.
3.3.1. Configuring a proxy during installation using OpenShift Cluster Manager
If you are installing an OpenShift Dedicated cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation. You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model.
Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.
For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster on AWS or Creating a cluster on GCP.
3.4. Configuring a proxy after installation
You can configure an HTTP or HTTPS proxy after you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager.
3.5. Configuring a proxy after installation using OpenShift Cluster Manager
You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing OpenShift Dedicated cluster in a Virtual Private Cloud (VPC). You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model.
You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.
The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.
Prerequisites
- You have an OpenShift Dedicated cluster that uses the Customer Cloud Subscription (CCS) model.
- Your cluster is deployed in a VPC.
Procedure
- Navigate to OpenShift Cluster Manager and select your cluster.
- Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy.
On the Edit cluster-wide proxy page, provide your proxy configuration details:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle.
If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
- Click Confirm.
Verification
- Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected.
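You can also confirm the applied settings from the command line, assuming your account can read cluster-scoped configuration. The cluster-wide proxy settings are stored in the cluster Proxy object, so a command like the following shows the HTTP proxy, HTTPS proxy, and trusted CA values that the cluster is using:
$ oc get proxy/cluster -o yaml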
Chapter 4. Enabling multicast for a project
4.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
- At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.
- By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of the exemption of multicast from network policies before enabling it.
Multicast traffic between OpenShift Dedicated pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis.
4.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.
$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled=true
Tip: You can alternatively apply YAML to add the annotation; a sketch of an equivalent manifest follows.
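The YAML from the original tip is not reproduced here; a minimal namespace manifest that sets the same annotation looks like the following. Replace <namespace> with the namespace for your project:
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/multicast-enabled: "true"    # same annotation that the oc annotate command sets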
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.
$ oc project <project>
Create a pod to act as a multicast receiver:
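The receiver manifest is not preserved in this extract. The following is a minimal sketch of a pod named mlistener, which is the name that the later verification commands expect. The container image and the package installation step are assumptions; any image that provides socat and hostname works:
apiVersion: v1
kind: Pod
metadata:
  name: mlistener                                  # name used by the later commands
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi9/ubi   # assumed base image
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat hostname && sleep inf"]   # install the tools the listener command needs
      ports:
        - containerPort: 30102                     # UDP port used by the listener command
          name: mlistener
          protocol: UDP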
Create a pod to act as a multicast sender:
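Similarly, the sender manifest is not preserved. The following is a minimal sketch of a pod named msender, matching the name used by the send command below, with the same image assumption:
apiVersion: v1
kind: Pod
metadata:
  name: msender                                    # name used by the send command
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi9/ubi   # assumed base image
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat && sleep inf"]  # install socat for the sender command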
In a new terminal window or tab, start the multicast listener.
Get the IP address for the Pod:
$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')
Start the multicast listener by entering the following command:
$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
Start the multicast transmitter.
Get the pod network IP address range:
$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')
To send a multicast message, enter the following command:
$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"
If multicast is working, the previous command returns the following output:
mlistener
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.