Chapter 13. OVN-Kubernetes default CNI network provider
13.1. About the OVN-Kubernetes default Container Network Interface (CNI) network provider
The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network.
13.1.1. OVN-Kubernetes features
The OVN-Kubernetes default Container Network Interface (CNI) network provider implements the following features:
- Uses OVN (Open Virtual Network) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
- Implements Kubernetes network policy support, including ingress and egress rules; a short example follows this list.
- Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.
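For illustration, the following is a minimal sketch of a NetworkPolicy that OVN-Kubernetes can enforce, combining an ingress rule with an egress ipBlock rule. The object name, namespace, labels, port, and CIDR are placeholders, not values from this documentation:

$ cat <<EOF | oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic      # placeholder name
  namespace: example-ns        # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24  # placeholder CIDR
EOF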
13.1.2. Supported default CNI network provider feature matrix
OpenShift Container Platform offers two supported choices, OpenShift SDN and OVN-Kubernetes, for the default Container Network Interface (CNI) network provider. The following table summarizes the current feature support for both network providers:
| Feature | OVN-Kubernetes [1] | OpenShift SDN |
|---|---|---|
| Egress IPs | Not supported | Supported |
| Egress firewall [2] | Not supported | Supported |
| Egress router | Not supported | Supported |
| Kubernetes network policy | Supported | Partially supported [3] |
| Multicast | Supported | Supported |
1. Available only as a Technology Preview feature in OpenShift Container Platform 4.5.
2. Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
3. Does not support egress rules and some ipBlock rules.
13.1.3. Exposed metrics for OVN-Kubernetes
The OVN-Kubernetes default Container Network Interface (CNI) network provider exposes certain metrics for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack.
| Name | Description |
|---|---|
| ovnkube_master_pod_creation_latency_seconds | The latency between when a pod is created and when the pod is annotated by OVN-Kubernetes. The higher the latency, the more time that elapses before a pod is available for network connectivity. |
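As a sketch of how you might inspect this metric, assuming the default prometheus-k8s route in the openshift-monitoring namespace and a user authorized to query cluster monitoring, you can call the Prometheus HTTP API directly:

$ TOKEN=$(oc whoami -t)
$ PROM_HOST=$(oc get route prometheus-k8s -n openshift-monitoring -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer ${TOKEN}" \
    "https://${PROM_HOST}/api/v1/query" \
    --data-urlencode 'query=ovnkube_master_pod_creation_latency_seconds'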
13.2. Migrate from the OpenShift SDN default CNI network provider
As a cluster administrator, you can migrate to the OVN-Kubernetes default Container Network Interface (CNI) network provider from the OpenShift SDN default CNI network provider.
The Open Virtual Networking (OVN) Kubernetes network plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of the OVN Technology Preview, see https://access.redhat.com/articles/4380121.
To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network provider.
13.2.1. Migrating to the OVN-Kubernetes default CNI network provider
As a cluster administrator, you can change the default Container Network Interface (CNI) network provider for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on bare metal infrastructure configured with the OpenShift SDN default CNI network provider.
- The cluster is in a known good state, without any errors.
Procedure
To back up the configuration for the cluster network, enter the following command:
$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml
To enable the migration, set an annotation on the Cluster Network Operator configuration object by entering the following command:
$ oc annotate Network.operator.openshift.io cluster \
    'networkoperator.openshift.io/network-migration'=""
To change the default CNI network provider, enter the following command:
$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'
To confirm the migration disabled the OpenShift SDN default CNI network provider and removed all OpenShift SDN pods, enter the following command. It might take several moments for all the OpenShift SDN pods to stop.
$ watch oc get pod -n openshift-sdn
To complete the migration, reboot each node in your cluster. For example, you could use a bash script similar to the following. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password.
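The following is a minimal sketch of such a script. It additionally assumes that each node's InternalIP is reachable over ssh as the core user; adjust the user and reboot command for your environment.

#!/bin/bash
# Reboot every node in the cluster over ssh.
# Assumes ssh access as the "core" user and passwordless sudo on each node.
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "Rebooting node ${ip}"
  ssh -o StrictHostKeyChecking=no core@"${ip}" sudo shutdown -r now
done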
After the nodes in your cluster have rebooted, confirm that the migration succeeded:
To confirm that the default CNI network provider is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes.
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
To confirm that the cluster nodes are in the Ready state, enter the following command:
$ oc get nodes
If a node is stuck in the NotReady state, reboot the node again.
To confirm that your pods are not in an error state, enter the following command:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'
If pods on a node are in an error state, reboot that node.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration annotation from the Cluster Network Operator configuration object, enter the following command:
$ oc annotate Network.operator.openshift.io cluster \
    networkoperator.openshift.io/network-migration-
To remove the OpenShift SDN network provider namespace, enter the following command:
$ oc delete namespace openshift-sdn
13.3. Rollback to the OpenShift SDN network provider
As a cluster administrator, you can roll back to the OpenShift SDN cluster default Container Network Interface (CNI) provider from the OVN-Kubernetes default CNI network provider if the migration to OVN-Kubernetes is unsuccessful.
13.3.1. Rolling back the default CNI network provider to OpenShift SDN
As a cluster administrator, you can roll back your cluster to the OpenShift SDN default Container Network Interface (CNI) network provider. During the rollback, you must reboot every node in your cluster.
Only roll back to OpenShift SDN if the migration to OVN-Kubernetes is unsuccessful.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on bare metal infrastructure configured with the OVN-Kubernetes default CNI network provider.
Procedure
To enable the migration, set an annotation on the Cluster Network Operator configuration object by entering the following command:
$ oc annotate Network.operator.openshift.io cluster \
    'networkoperator.openshift.io/network-migration'=""
To change the default CNI network provider, enter the following command:
$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{ "spec": { "networkType": "OpenShiftSDN" } }'
Optional: Use the backup of the cluster network configuration that you created before the migration to restore any customizations to the network configuration that you might have made. To restore the customizations, enter the following command to edit the Cluster Network Operator configuration:
$ oc edit Network.config.openshift.io cluster
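Before editing, it can help to see how the live configuration differs from your backup. This is a sketch that assumes the cluster-openshift-sdn.yaml backup file created earlier in this chapter:

# Differences in metadata and status fields, such as resourceVersion, are expected;
# focus on the fields under .spec that you want to restore.
diff <(oc get Network.config.openshift.io cluster -o yaml) cluster-openshift-sdn.yaml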
To confirm that the migration disabled the OVN-Kubernetes default CNI network provider and removed all the OVN-Kubernetes pods, enter the following command. It might take several moments for all the OVN-Kubernetes pods to stop.
$ watch oc get pod -n openshift-ovn-kubernetes
To complete the rollback, reboot each node in your cluster. For example, you could use a bash script similar to the following. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password.
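As in the migration procedure, a minimal sketch of such a script, assuming ssh access as the core user and passwordless sudo on each node, might look like this:

#!/bin/bash
# Reboot every node in the cluster over ssh.
# Assumes ssh access as the "core" user and passwordless sudo on each node.
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "Rebooting node ${ip}"
  ssh -o StrictHostKeyChecking=no core@"${ip}" sudo shutdown -r now
done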
After the nodes in your cluster have rebooted, enter the following command to confirm that the default CNI network provider is OpenShift SDN. The value of status.networkType must be OpenShiftSDN.
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
To confirm that the OpenShift SDN pods are in the Ready state, enter the following command:
$ oc get pod -n openshift-sdn --watch
To remove the migration annotation from the Cluster Network Operator configuration object, enter the following command:
$ oc annotate Network.operator.openshift.io cluster \
    networkoperator.openshift.io/network-migration-
To remove the OVN-Kubernetes network provider namespace, enter the following command:
$ oc delete namespace openshift-ovn-kubernetes
13.4. Enabling multicast for a project
The Open Virtual Networking (OVN) Kubernetes network plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of the OVN Technology Preview, see https://access.redhat.com/articles/4380121.
In OpenShift Container Platform 4.5, a bug prevents Pods in the same namespace, but assigned to different nodes, from communicating over multicast. For more information, see BZ#1843695.
13.4.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.
Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis.
13.4.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.
$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled=true
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.
$ oc project <project>
Create a pod to act as a multicast receiver:
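The following pod definition is a sketch; it assumes the registry.access.redhat.com/ubi8 image and installs socat at startup. The pod name mlistener and UDP port 30102 are what the commands later in this procedure expect.

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mlistener
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi8
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat hostname && sleep inf"]
      ports:
        - containerPort: 30102
          name: mlistener
          protocol: UDP
EOF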
Create a pod to act as a multicast sender:
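Again as a sketch, assuming the same ubi8 image with socat installed at startup; the pod name msender is what the transmitter commands below expect.

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: msender
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi8
      command: ["/bin/sh", "-c"]
      args: ["dnf -y install socat && sleep inf"]
EOF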
Start the multicast listener.
Get the IP address for the pod:
$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')
To start the multicast listener, in a new terminal window or tab, enter the following command:
$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
Start the multicast transmitter.
Get the pod network IP address range:
$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')
To send a multicast message, enter the following command:
$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"
If multicast is working, the previous command returns the following output:
mlistener
13.5. Disabling multicast for a project
The Open Virtual Networking (OVN) Kubernetes network plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of the OVN Technology Preview, see https://access.redhat.com/articles/4380121.
13.5.1. Disabling multicast between pods
You can disable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Disable multicast by running the following command:
$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled-
Replace <namespace> with the namespace for the project you want to disable multicast for.
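As an optional check, you can list the namespace annotations and confirm that the k8s.ovn.org/multicast-enabled key is no longer present:

$ oc get namespace <namespace> -o jsonpath='{.metadata.annotations}{"\n"}'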